Test Report: KVM_Linux_crio 20349

489ab7de64945da673e8d97ced0c6161a23ed74f:2025-04-14:39139

Failed tests (10/321)

TestAddons/parallel/Ingress (151.35s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-411768 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-411768 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-411768 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c6fdc475-449b-4a8c-a72c-3d42ef531b1c] Pending
helpers_test.go:344: "nginx" [c6fdc475-449b-4a8c-a72c-3d42ef531b1c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c6fdc475-449b-4a8c-a72c-3d42ef531b1c] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00420126s
I0414 16:34:17.622860  156633 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-411768 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-411768 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.218295237s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-411768 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-411768 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.237
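Note on the failure above: exit status 28 is curl's "operation timed out" code, relayed through ssh, meaning nothing answered on 127.0.0.1:80 inside the VM before the timeout. A minimal sketch for reproducing the check by hand — the profile name, label selector, and Host header come from the log above, while the --max-time value is an arbitrary choice:

# Verify the ingress-nginx controller pod reports Ready
kubectl --context addons-411768 -n ingress-nginx get pods -l app.kubernetes.io/component=controller

# Re-run the same in-VM probe the test performs, with verbose output for diagnosis
out/minikube-linux-amd64 -p addons-411768 ssh "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
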
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-411768 -n addons-411768
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-411768 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-411768 logs -n 25: (1.399880948s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-356094                                                                     | download-only-356094 | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC | 14 Apr 25 16:31 UTC |
	| delete  | -p download-only-383049                                                                     | download-only-383049 | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC | 14 Apr 25 16:31 UTC |
	| delete  | -p download-only-356094                                                                     | download-only-356094 | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC | 14 Apr 25 16:31 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-396839 | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC |                     |
	|         | binary-mirror-396839                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35155                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-396839                                                                     | binary-mirror-396839 | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC | 14 Apr 25 16:31 UTC |
	| addons  | enable dashboard -p                                                                         | addons-411768        | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC |                     |
	|         | addons-411768                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-411768        | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC |                     |
	|         | addons-411768                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-411768 --wait=true                                                                | addons-411768        | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC | 14 Apr 25 16:33 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-411768 addons disable                                                                | addons-411768        | jenkins | v1.35.0 | 14 Apr 25 16:33 UTC | 14 Apr 25 16:33 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-411768 addons disable                                                                | addons-411768        | jenkins | v1.35.0 | 14 Apr 25 16:33 UTC | 14 Apr 25 16:33 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-411768        | jenkins | v1.35.0 | 14 Apr 25 16:33 UTC | 14 Apr 25 16:33 UTC |
	|         | -p addons-411768                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-411768 addons                                                                        | addons-411768        | jenkins | v1.35.0 | 14 Apr 25 16:33 UTC | 14 Apr 25 16:33 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-411768 addons                                                                        | addons-411768        | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-411768 addons disable                                                                | addons-411768        | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-411768 ip                                                                            | addons-411768        | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
	| addons  | addons-411768 addons disable                                                                | addons-411768        | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-411768 addons disable                                                                | addons-411768        | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-411768 ssh curl -s                                                                   | addons-411768        | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-411768 ssh cat                                                                       | addons-411768        | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
	|         | /opt/local-path-provisioner/pvc-0223bcea-7c20-4f57-890f-2ceeb26fd209_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-411768 addons disable                                                                | addons-411768        | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-411768 addons                                                                        | addons-411768        | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-411768 addons                                                                        | addons-411768        | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-411768 addons                                                                        | addons-411768        | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-411768 addons                                                                        | addons-411768        | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-411768 ip                                                                            | addons-411768        | jenkins | v1.35.0 | 14 Apr 25 16:36 UTC | 14 Apr 25 16:36 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 16:31:18
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 16:31:18.165165  157245 out.go:345] Setting OutFile to fd 1 ...
	I0414 16:31:18.165434  157245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 16:31:18.165444  157245 out.go:358] Setting ErrFile to fd 2...
	I0414 16:31:18.165448  157245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 16:31:18.165601  157245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	I0414 16:31:18.166170  157245 out.go:352] Setting JSON to false
	I0414 16:31:18.167043  157245 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4376,"bootTime":1744643902,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 16:31:18.167148  157245 start.go:139] virtualization: kvm guest
	I0414 16:31:18.168712  157245 out.go:177] * [addons-411768] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 16:31:18.169729  157245 out.go:177]   - MINIKUBE_LOCATION=20349
	I0414 16:31:18.169739  157245 notify.go:220] Checking for updates...
	I0414 16:31:18.171747  157245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 16:31:18.172941  157245 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 16:31:18.173968  157245 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 16:31:18.175143  157245 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 16:31:18.176137  157245 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 16:31:18.177181  157245 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 16:31:18.207857  157245 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 16:31:18.208872  157245 start.go:297] selected driver: kvm2
	I0414 16:31:18.208881  157245 start.go:901] validating driver "kvm2" against <nil>
	I0414 16:31:18.208891  157245 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 16:31:18.209581  157245 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 16:31:18.209649  157245 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20349-149500/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 16:31:18.224501  157245 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 16:31:18.224537  157245 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 16:31:18.224794  157245 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 16:31:18.224834  157245 cni.go:84] Creating CNI manager for ""
	I0414 16:31:18.224880  157245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 16:31:18.224893  157245 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 16:31:18.224954  157245 start.go:340] cluster config:
	{Name:addons-411768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-411768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 16:31:18.225100  157245 iso.go:125] acquiring lock: {Name:mk56ab209abfa01de10f2f82564ecd03de00499a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 16:31:18.226589  157245 out.go:177] * Starting "addons-411768" primary control-plane node in "addons-411768" cluster
	I0414 16:31:18.227628  157245 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 16:31:18.227659  157245 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 16:31:18.227671  157245 cache.go:56] Caching tarball of preloaded images
	I0414 16:31:18.227745  157245 preload.go:172] Found /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 16:31:18.227756  157245 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 16:31:18.228095  157245 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/config.json ...
	I0414 16:31:18.228123  157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/config.json: {Name:mkcb6309a4986d6ced4a41482792d346f9017346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 16:31:18.228247  157245 start.go:360] acquireMachinesLock for addons-411768: {Name:mk6f64d523f60ec1e047c10a4c586315976dcd43 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 16:31:18.228309  157245 start.go:364] duration metric: took 47.488µs to acquireMachinesLock for "addons-411768"
	I0414 16:31:18.228330  157245 start.go:93] Provisioning new machine with config: &{Name:addons-411768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-411768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 16:31:18.228394  157245 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 16:31:18.229747  157245 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0414 16:31:18.229893  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:31:18.229939  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:31:18.243165  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I0414 16:31:18.243642  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:31:18.244199  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:31:18.244218  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:31:18.244552  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:31:18.244706  157245 main.go:141] libmachine: (addons-411768) Calling .GetMachineName
	I0414 16:31:18.244843  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:31:18.244986  157245 start.go:159] libmachine.API.Create for "addons-411768" (driver="kvm2")
	I0414 16:31:18.245016  157245 client.go:168] LocalClient.Create starting
	I0414 16:31:18.245052  157245 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem
	I0414 16:31:18.541759  157245 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem
	I0414 16:31:18.635842  157245 main.go:141] libmachine: Running pre-create checks...
	I0414 16:31:18.635864  157245 main.go:141] libmachine: (addons-411768) Calling .PreCreateCheck
	I0414 16:31:18.636332  157245 main.go:141] libmachine: (addons-411768) Calling .GetConfigRaw
	I0414 16:31:18.636752  157245 main.go:141] libmachine: Creating machine...
	I0414 16:31:18.636768  157245 main.go:141] libmachine: (addons-411768) Calling .Create
	I0414 16:31:18.636945  157245 main.go:141] libmachine: (addons-411768) creating KVM machine...
	I0414 16:31:18.636964  157245 main.go:141] libmachine: (addons-411768) creating network...
	I0414 16:31:18.638220  157245 main.go:141] libmachine: (addons-411768) DBG | found existing default KVM network
	I0414 16:31:18.638957  157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:18.638810  157267 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000208dd0}
	I0414 16:31:18.639002  157245 main.go:141] libmachine: (addons-411768) DBG | created network xml: 
	I0414 16:31:18.639032  157245 main.go:141] libmachine: (addons-411768) DBG | <network>
	I0414 16:31:18.639042  157245 main.go:141] libmachine: (addons-411768) DBG |   <name>mk-addons-411768</name>
	I0414 16:31:18.639047  157245 main.go:141] libmachine: (addons-411768) DBG |   <dns enable='no'/>
	I0414 16:31:18.639052  157245 main.go:141] libmachine: (addons-411768) DBG |   
	I0414 16:31:18.639057  157245 main.go:141] libmachine: (addons-411768) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 16:31:18.639065  157245 main.go:141] libmachine: (addons-411768) DBG |     <dhcp>
	I0414 16:31:18.639070  157245 main.go:141] libmachine: (addons-411768) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 16:31:18.639075  157245 main.go:141] libmachine: (addons-411768) DBG |     </dhcp>
	I0414 16:31:18.639081  157245 main.go:141] libmachine: (addons-411768) DBG |   </ip>
	I0414 16:31:18.639092  157245 main.go:141] libmachine: (addons-411768) DBG |   
	I0414 16:31:18.639103  157245 main.go:141] libmachine: (addons-411768) DBG | </network>
	I0414 16:31:18.639128  157245 main.go:141] libmachine: (addons-411768) DBG | 
	I0414 16:31:18.643909  157245 main.go:141] libmachine: (addons-411768) DBG | trying to create private KVM network mk-addons-411768 192.168.39.0/24...
	I0414 16:31:18.706993  157245 main.go:141] libmachine: (addons-411768) DBG | private KVM network mk-addons-411768 192.168.39.0/24 created
	I0414 16:31:18.707043  157245 main.go:141] libmachine: (addons-411768) setting up store path in /home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768 ...
	I0414 16:31:18.707055  157245 main.go:141] libmachine: (addons-411768) building disk image from file:///home/jenkins/minikube-integration/20349-149500/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 16:31:18.707068  157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:18.706969  157267 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 16:31:18.707216  157245 main.go:141] libmachine: (addons-411768) Downloading /home/jenkins/minikube-integration/20349-149500/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20349-149500/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 16:31:18.959419  157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:18.959292  157267 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa...
	I0414 16:31:19.317156  157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:19.316992  157267 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/addons-411768.rawdisk...
	I0414 16:31:19.317187  157245 main.go:141] libmachine: (addons-411768) DBG | Writing magic tar header
	I0414 16:31:19.317201  157245 main.go:141] libmachine: (addons-411768) DBG | Writing SSH key tar header
	I0414 16:31:19.317212  157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:19.317160  157267 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768 ...
	I0414 16:31:19.317318  157245 main.go:141] libmachine: (addons-411768) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768
	I0414 16:31:19.317339  157245 main.go:141] libmachine: (addons-411768) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20349-149500/.minikube/machines
	I0414 16:31:19.317352  157245 main.go:141] libmachine: (addons-411768) setting executable bit set on /home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768 (perms=drwx------)
	I0414 16:31:19.317368  157245 main.go:141] libmachine: (addons-411768) setting executable bit set on /home/jenkins/minikube-integration/20349-149500/.minikube/machines (perms=drwxr-xr-x)
	I0414 16:31:19.317379  157245 main.go:141] libmachine: (addons-411768) setting executable bit set on /home/jenkins/minikube-integration/20349-149500/.minikube (perms=drwxr-xr-x)
	I0414 16:31:19.317390  157245 main.go:141] libmachine: (addons-411768) setting executable bit set on /home/jenkins/minikube-integration/20349-149500 (perms=drwxrwxr-x)
	I0414 16:31:19.317403  157245 main.go:141] libmachine: (addons-411768) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 16:31:19.317415  157245 main.go:141] libmachine: (addons-411768) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 16:31:19.317431  157245 main.go:141] libmachine: (addons-411768) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 16:31:19.317440  157245 main.go:141] libmachine: (addons-411768) creating domain...
	I0414 16:31:19.317454  157245 main.go:141] libmachine: (addons-411768) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20349-149500
	I0414 16:31:19.317465  157245 main.go:141] libmachine: (addons-411768) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 16:31:19.317473  157245 main.go:141] libmachine: (addons-411768) DBG | checking permissions on dir: /home/jenkins
	I0414 16:31:19.317483  157245 main.go:141] libmachine: (addons-411768) DBG | checking permissions on dir: /home
	I0414 16:31:19.317493  157245 main.go:141] libmachine: (addons-411768) DBG | skipping /home - not owner
	I0414 16:31:19.318491  157245 main.go:141] libmachine: (addons-411768) define libvirt domain using xml: 
	I0414 16:31:19.318516  157245 main.go:141] libmachine: (addons-411768) <domain type='kvm'>
	I0414 16:31:19.318528  157245 main.go:141] libmachine: (addons-411768)   <name>addons-411768</name>
	I0414 16:31:19.318536  157245 main.go:141] libmachine: (addons-411768)   <memory unit='MiB'>4000</memory>
	I0414 16:31:19.318568  157245 main.go:141] libmachine: (addons-411768)   <vcpu>2</vcpu>
	I0414 16:31:19.318579  157245 main.go:141] libmachine: (addons-411768)   <features>
	I0414 16:31:19.318594  157245 main.go:141] libmachine: (addons-411768)     <acpi/>
	I0414 16:31:19.318607  157245 main.go:141] libmachine: (addons-411768)     <apic/>
	I0414 16:31:19.318645  157245 main.go:141] libmachine: (addons-411768)     <pae/>
	I0414 16:31:19.318666  157245 main.go:141] libmachine: (addons-411768)     
	I0414 16:31:19.318676  157245 main.go:141] libmachine: (addons-411768)   </features>
	I0414 16:31:19.318689  157245 main.go:141] libmachine: (addons-411768)   <cpu mode='host-passthrough'>
	I0414 16:31:19.318714  157245 main.go:141] libmachine: (addons-411768)   
	I0414 16:31:19.318726  157245 main.go:141] libmachine: (addons-411768)   </cpu>
	I0414 16:31:19.318735  157245 main.go:141] libmachine: (addons-411768)   <os>
	I0414 16:31:19.318745  157245 main.go:141] libmachine: (addons-411768)     <type>hvm</type>
	I0414 16:31:19.318755  157245 main.go:141] libmachine: (addons-411768)     <boot dev='cdrom'/>
	I0414 16:31:19.318768  157245 main.go:141] libmachine: (addons-411768)     <boot dev='hd'/>
	I0414 16:31:19.318798  157245 main.go:141] libmachine: (addons-411768)     <bootmenu enable='no'/>
	I0414 16:31:19.318819  157245 main.go:141] libmachine: (addons-411768)   </os>
	I0414 16:31:19.318828  157245 main.go:141] libmachine: (addons-411768)   <devices>
	I0414 16:31:19.318851  157245 main.go:141] libmachine: (addons-411768)     <disk type='file' device='cdrom'>
	I0414 16:31:19.318869  157245 main.go:141] libmachine: (addons-411768)       <source file='/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/boot2docker.iso'/>
	I0414 16:31:19.318877  157245 main.go:141] libmachine: (addons-411768)       <target dev='hdc' bus='scsi'/>
	I0414 16:31:19.318899  157245 main.go:141] libmachine: (addons-411768)       <readonly/>
	I0414 16:31:19.318917  157245 main.go:141] libmachine: (addons-411768)     </disk>
	I0414 16:31:19.318934  157245 main.go:141] libmachine: (addons-411768)     <disk type='file' device='disk'>
	I0414 16:31:19.318949  157245 main.go:141] libmachine: (addons-411768)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 16:31:19.318967  157245 main.go:141] libmachine: (addons-411768)       <source file='/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/addons-411768.rawdisk'/>
	I0414 16:31:19.318978  157245 main.go:141] libmachine: (addons-411768)       <target dev='hda' bus='virtio'/>
	I0414 16:31:19.318990  157245 main.go:141] libmachine: (addons-411768)     </disk>
	I0414 16:31:19.319005  157245 main.go:141] libmachine: (addons-411768)     <interface type='network'>
	I0414 16:31:19.319027  157245 main.go:141] libmachine: (addons-411768)       <source network='mk-addons-411768'/>
	I0414 16:31:19.319038  157245 main.go:141] libmachine: (addons-411768)       <model type='virtio'/>
	I0414 16:31:19.319048  157245 main.go:141] libmachine: (addons-411768)     </interface>
	I0414 16:31:19.319058  157245 main.go:141] libmachine: (addons-411768)     <interface type='network'>
	I0414 16:31:19.319072  157245 main.go:141] libmachine: (addons-411768)       <source network='default'/>
	I0414 16:31:19.319092  157245 main.go:141] libmachine: (addons-411768)       <model type='virtio'/>
	I0414 16:31:19.319104  157245 main.go:141] libmachine: (addons-411768)     </interface>
	I0414 16:31:19.319119  157245 main.go:141] libmachine: (addons-411768)     <serial type='pty'>
	I0414 16:31:19.319130  157245 main.go:141] libmachine: (addons-411768)       <target port='0'/>
	I0414 16:31:19.319143  157245 main.go:141] libmachine: (addons-411768)     </serial>
	I0414 16:31:19.319154  157245 main.go:141] libmachine: (addons-411768)     <console type='pty'>
	I0414 16:31:19.319163  157245 main.go:141] libmachine: (addons-411768)       <target type='serial' port='0'/>
	I0414 16:31:19.319174  157245 main.go:141] libmachine: (addons-411768)     </console>
	I0414 16:31:19.319182  157245 main.go:141] libmachine: (addons-411768)     <rng model='virtio'>
	I0414 16:31:19.319195  157245 main.go:141] libmachine: (addons-411768)       <backend model='random'>/dev/random</backend>
	I0414 16:31:19.319213  157245 main.go:141] libmachine: (addons-411768)     </rng>
	I0414 16:31:19.319222  157245 main.go:141] libmachine: (addons-411768)     
	I0414 16:31:19.319231  157245 main.go:141] libmachine: (addons-411768)     
	I0414 16:31:19.319240  157245 main.go:141] libmachine: (addons-411768)   </devices>
	I0414 16:31:19.319249  157245 main.go:141] libmachine: (addons-411768) </domain>
	I0414 16:31:19.319258  157245 main.go:141] libmachine: (addons-411768) 
	I0414 16:31:19.322828  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:20:b7:3f in network default
	I0414 16:31:19.323315  157245 main.go:141] libmachine: (addons-411768) starting domain...
	I0414 16:31:19.323337  157245 main.go:141] libmachine: (addons-411768) ensuring networks are active...
	I0414 16:31:19.323348  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:19.323956  157245 main.go:141] libmachine: (addons-411768) Ensuring network default is active
	I0414 16:31:19.324246  157245 main.go:141] libmachine: (addons-411768) Ensuring network mk-addons-411768 is active
	I0414 16:31:19.324660  157245 main.go:141] libmachine: (addons-411768) getting domain XML...
	I0414 16:31:19.325289  157245 main.go:141] libmachine: (addons-411768) creating domain...
	I0414 16:31:20.494301  157245 main.go:141] libmachine: (addons-411768) waiting for IP...
	I0414 16:31:20.494965  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:20.495284  157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
	I0414 16:31:20.495349  157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:20.495293  157267 retry.go:31] will retry after 312.32887ms: waiting for domain to come up
	I0414 16:31:20.808758  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:20.809228  157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
	I0414 16:31:20.809258  157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:20.809189  157267 retry.go:31] will retry after 247.375577ms: waiting for domain to come up
	I0414 16:31:21.058635  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:21.059068  157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
	I0414 16:31:21.059097  157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:21.059024  157267 retry.go:31] will retry after 466.453619ms: waiting for domain to come up
	I0414 16:31:21.526557  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:21.526927  157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
	I0414 16:31:21.526948  157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:21.526904  157267 retry.go:31] will retry after 432.389693ms: waiting for domain to come up
	I0414 16:31:21.960377  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:21.960818  157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
	I0414 16:31:21.960856  157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:21.960782  157267 retry.go:31] will retry after 547.701184ms: waiting for domain to come up
	I0414 16:31:22.511558  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:22.512082  157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
	I0414 16:31:22.512107  157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:22.512048  157267 retry.go:31] will retry after 810.522572ms: waiting for domain to come up
	I0414 16:31:23.324254  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:23.324660  157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
	I0414 16:31:23.324703  157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:23.324629  157267 retry.go:31] will retry after 1.103233225s: waiting for domain to come up
	I0414 16:31:24.429919  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:24.430378  157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
	I0414 16:31:24.430407  157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:24.430318  157267 retry.go:31] will retry after 1.14528623s: waiting for domain to come up
	I0414 16:31:25.577617  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:25.577975  157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
	I0414 16:31:25.578005  157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:25.577945  157267 retry.go:31] will retry after 1.858984681s: waiting for domain to come up
	I0414 16:31:27.438914  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:27.439378  157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
	I0414 16:31:27.439419  157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:27.439356  157267 retry.go:31] will retry after 2.310241133s: waiting for domain to come up
	I0414 16:31:29.751300  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:29.751813  157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
	I0414 16:31:29.751857  157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:29.751775  157267 retry.go:31] will retry after 2.494754123s: waiting for domain to come up
	I0414 16:31:32.249280  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:32.249780  157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
	I0414 16:31:32.249810  157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:32.249761  157267 retry.go:31] will retry after 3.010871662s: waiting for domain to come up
	I0414 16:31:35.262847  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:35.263313  157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
	I0414 16:31:35.263358  157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:35.263309  157267 retry.go:31] will retry after 3.112482414s: waiting for domain to come up
	I0414 16:31:38.377075  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:38.377413  157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
	I0414 16:31:38.377463  157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:38.377417  157267 retry.go:31] will retry after 3.628902204s: waiting for domain to come up
	I0414 16:31:42.010099  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:42.010509  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has current primary IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:42.010533  157245 main.go:141] libmachine: (addons-411768) found domain IP: 192.168.39.237
	I0414 16:31:42.010554  157245 main.go:141] libmachine: (addons-411768) reserving static IP address...
	I0414 16:31:42.010919  157245 main.go:141] libmachine: (addons-411768) DBG | unable to find host DHCP lease matching {name: "addons-411768", mac: "52:54:00:81:2d:89", ip: "192.168.39.237"} in network mk-addons-411768
	I0414 16:31:42.079547  157245 main.go:141] libmachine: (addons-411768) reserved static IP address 192.168.39.237 for domain addons-411768
	I0414 16:31:42.079576  157245 main.go:141] libmachine: (addons-411768) DBG | Getting to WaitForSSH function...
	I0414 16:31:42.079584  157245 main.go:141] libmachine: (addons-411768) waiting for SSH...
	I0414 16:31:42.081948  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:42.082267  157245 main.go:141] libmachine: (addons-411768) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768
	I0414 16:31:42.082295  157245 main.go:141] libmachine: (addons-411768) DBG | unable to find defined IP address of network mk-addons-411768 interface with MAC address 52:54:00:81:2d:89
	I0414 16:31:42.082430  157245 main.go:141] libmachine: (addons-411768) DBG | Using SSH client type: external
	I0414 16:31:42.082459  157245 main.go:141] libmachine: (addons-411768) DBG | Using SSH private key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa (-rw-------)
	I0414 16:31:42.082498  157245 main.go:141] libmachine: (addons-411768) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 16:31:42.082509  157245 main.go:141] libmachine: (addons-411768) DBG | About to run SSH command:
	I0414 16:31:42.082521  157245 main.go:141] libmachine: (addons-411768) DBG | exit 0
	I0414 16:31:42.086060  157245 main.go:141] libmachine: (addons-411768) DBG | SSH cmd err, output: exit status 255: 
	I0414 16:31:42.086087  157245 main.go:141] libmachine: (addons-411768) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0414 16:31:42.086094  157245 main.go:141] libmachine: (addons-411768) DBG | command : exit 0
	I0414 16:31:42.086100  157245 main.go:141] libmachine: (addons-411768) DBG | err     : exit status 255
	I0414 16:31:42.086108  157245 main.go:141] libmachine: (addons-411768) DBG | output  : 
	I0414 16:31:45.087759  157245 main.go:141] libmachine: (addons-411768) DBG | Getting to WaitForSSH function...
	I0414 16:31:45.090263  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:45.090581  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:31:45.090601  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:45.090752  157245 main.go:141] libmachine: (addons-411768) DBG | Using SSH client type: external
	I0414 16:31:45.090775  157245 main.go:141] libmachine: (addons-411768) DBG | Using SSH private key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa (-rw-------)
	I0414 16:31:45.090806  157245 main.go:141] libmachine: (addons-411768) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.237 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 16:31:45.090821  157245 main.go:141] libmachine: (addons-411768) DBG | About to run SSH command:
	I0414 16:31:45.090848  157245 main.go:141] libmachine: (addons-411768) DBG | exit 0
	I0414 16:31:45.213395  157245 main.go:141] libmachine: (addons-411768) DBG | SSH cmd err, output: <nil>: 
	I0414 16:31:45.213700  157245 main.go:141] libmachine: (addons-411768) KVM machine creation complete
	I0414 16:31:45.213978  157245 main.go:141] libmachine: (addons-411768) Calling .GetConfigRaw
	I0414 16:31:45.214532  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:31:45.214711  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:31:45.214826  157245 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 16:31:45.214840  157245 main.go:141] libmachine: (addons-411768) Calling .GetState
	I0414 16:31:45.216023  157245 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 16:31:45.216034  157245 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 16:31:45.216039  157245 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 16:31:45.216044  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:31:45.218121  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:45.218473  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:31:45.218491  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:45.218615  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:31:45.218773  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:31:45.218930  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:31:45.219021  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:31:45.219166  157245 main.go:141] libmachine: Using SSH client type: native
	I0414 16:31:45.219368  157245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0414 16:31:45.219377  157245 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 16:31:45.324619  157245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
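
The `exit 0` probes above are the whole WaitForSSH contract: libmachine keeps retrying a no-op remote command, first through the external ssh binary and then through its native Go client, until the guest accepts the connection. A minimal sketch of that retry pattern, assuming a plain `ssh` binary on PATH (an illustration, not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForSSH retries a no-op remote command until sshd answers or we give up.
	func waitForSSH(target string, attempts int) error {
		for i := 0; i < attempts; i++ {
			// "exit 0" succeeds only if the TCP connection and auth both work.
			if exec.Command("ssh", "-o", "ConnectTimeout=10", target, "exit", "0").Run() == nil {
				return nil
			}
			time.Sleep(3 * time.Second) // roughly the retry spacing visible in the log
		}
		return fmt.Errorf("ssh to %s never came up", target)
	}

	func main() {
		fmt.Println(waitForSSH("docker@192.168.39.237", 3))
	}
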
	I0414 16:31:45.324638  157245 main.go:141] libmachine: Detecting the provisioner...
	I0414 16:31:45.324644  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:31:45.327115  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:45.327424  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:31:45.327453  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:45.327610  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:31:45.327778  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:31:45.327895  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:31:45.328012  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:31:45.328154  157245 main.go:141] libmachine: Using SSH client type: native
	I0414 16:31:45.328426  157245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0414 16:31:45.328439  157245 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 16:31:45.429915  157245 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 16:31:45.429997  157245 main.go:141] libmachine: found compatible host: buildroot
	I0414 16:31:45.430012  157245 main.go:141] libmachine: Provisioning with buildroot...
	I0414 16:31:45.430023  157245 main.go:141] libmachine: (addons-411768) Calling .GetMachineName
	I0414 16:31:45.430319  157245 buildroot.go:166] provisioning hostname "addons-411768"
	I0414 16:31:45.430342  157245 main.go:141] libmachine: (addons-411768) Calling .GetMachineName
	I0414 16:31:45.430498  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:31:45.432908  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:45.433231  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:31:45.433251  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:45.433377  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:31:45.433534  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:31:45.433683  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:31:45.433799  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:31:45.433957  157245 main.go:141] libmachine: Using SSH client type: native
	I0414 16:31:45.434153  157245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0414 16:31:45.434165  157245 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-411768 && echo "addons-411768" | sudo tee /etc/hostname
	I0414 16:31:45.546961  157245 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-411768
	
	I0414 16:31:45.546989  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:31:45.549613  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:45.549979  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:31:45.550005  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:45.550211  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:31:45.550400  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:31:45.550549  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:31:45.550691  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:31:45.550963  157245 main.go:141] libmachine: Using SSH client type: native
	I0414 16:31:45.551212  157245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0414 16:31:45.551239  157245 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-411768' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-411768/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-411768' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 16:31:45.657525  157245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
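
The shell snippet above makes the hostname change idempotent: if some line in /etc/hosts already ends in the machine name it does nothing, otherwise it rewrites (or appends) the conventional 127.0.1.1 entry so the new hostname always resolves locally. The same logic as a small Go sketch (an illustration, not minikube's code):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostname maps name to 127.0.1.1 in hosts-file content, idempotently.
	func ensureHostname(hosts, name string) string {
		if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
			return hosts // some line already ends in the hostname: nothing to do
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		fmt.Print(ensureHostname("127.0.0.1 localhost\n", "addons-411768"))
	}
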
	I0414 16:31:45.657555  157245 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20349-149500/.minikube CaCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20349-149500/.minikube}
	I0414 16:31:45.657577  157245 buildroot.go:174] setting up certificates
	I0414 16:31:45.657589  157245 provision.go:84] configureAuth start
	I0414 16:31:45.657603  157245 main.go:141] libmachine: (addons-411768) Calling .GetMachineName
	I0414 16:31:45.657842  157245 main.go:141] libmachine: (addons-411768) Calling .GetIP
	I0414 16:31:45.660375  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:45.660769  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:31:45.660803  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:45.660944  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:31:45.663138  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:45.663489  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:31:45.663522  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:45.663601  157245 provision.go:143] copyHostCerts
	I0414 16:31:45.663666  157245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem (1082 bytes)
	I0414 16:31:45.663831  157245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem (1123 bytes)
	I0414 16:31:45.663909  157245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem (1675 bytes)
	I0414 16:31:45.663967  157245 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem org=jenkins.addons-411768 san=[127.0.0.1 192.168.39.237 addons-411768 localhost minikube]
	I0414 16:31:45.795299  157245 provision.go:177] copyRemoteCerts
	I0414 16:31:45.795360  157245 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 16:31:45.795383  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:31:45.797898  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:45.798191  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:31:45.798220  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:45.798360  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:31:45.798539  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:31:45.798662  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:31:45.798789  157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
	I0414 16:31:45.879801  157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 16:31:45.902404  157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 16:31:45.925084  157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 16:31:45.949044  157245 provision.go:87] duration metric: took 291.438071ms to configureAuth
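
configureAuth generated a server certificate whose SANs cover every way the TLS endpoint might be addressed (127.0.0.1, the VM IP, the machine name, localhost, minikube) and pushed the CA and server pair to /etc/docker on the guest. A self-contained sketch of issuing such a SAN-bearing certificate with Go's standard library (self-signed here for brevity; minikube signs with its own CA key instead):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-411768"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the san=[...] list in the log above.
			DNSNames:    []string{"addons-411768", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.237")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}
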
	I0414 16:31:45.949068  157245 buildroot.go:189] setting minikube options for container-runtime
	I0414 16:31:45.949221  157245 config.go:182] Loaded profile config "addons-411768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 16:31:45.949309  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:31:45.951682  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:45.951981  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:31:45.952011  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:45.952166  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:31:45.952333  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:31:45.952467  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:31:45.952638  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:31:45.952795  157245 main.go:141] libmachine: Using SSH client type: native
	I0414 16:31:45.953094  157245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0414 16:31:45.953120  157245 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 16:31:46.166457  157245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 16:31:46.166488  157245 main.go:141] libmachine: Checking connection to Docker...
	I0414 16:31:46.166495  157245 main.go:141] libmachine: (addons-411768) Calling .GetURL
	I0414 16:31:46.167769  157245 main.go:141] libmachine: (addons-411768) DBG | using libvirt version 6000000
	I0414 16:31:46.170175  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:46.170508  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:31:46.170536  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:46.170657  157245 main.go:141] libmachine: Docker is up and running!
	I0414 16:31:46.170673  157245 main.go:141] libmachine: Reticulating splines...
	I0414 16:31:46.170682  157245 client.go:171] duration metric: took 27.925654468s to LocalClient.Create
	I0414 16:31:46.170701  157245 start.go:167] duration metric: took 27.925716628s to libmachine.API.Create "addons-411768"
	I0414 16:31:46.170712  157245 start.go:293] postStartSetup for "addons-411768" (driver="kvm2")
	I0414 16:31:46.170722  157245 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 16:31:46.170739  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:31:46.170954  157245 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 16:31:46.170978  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:31:46.172841  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:46.173138  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:31:46.173167  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:46.173288  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:31:46.173451  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:31:46.173601  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:31:46.173740  157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
	I0414 16:31:46.257790  157245 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 16:31:46.261866  157245 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 16:31:46.261882  157245 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/addons for local assets ...
	I0414 16:31:46.261943  157245 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/files for local assets ...
	I0414 16:31:46.261966  157245 start.go:296] duration metric: took 91.249009ms for postStartSetup
	I0414 16:31:46.262012  157245 main.go:141] libmachine: (addons-411768) Calling .GetConfigRaw
	I0414 16:31:46.262500  157245 main.go:141] libmachine: (addons-411768) Calling .GetIP
	I0414 16:31:46.264919  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:46.265236  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:31:46.265265  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:46.265442  157245 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/config.json ...
	I0414 16:31:46.265640  157245 start.go:128] duration metric: took 28.037232893s to createHost
	I0414 16:31:46.265670  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:31:46.267566  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:46.267838  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:31:46.267867  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:46.268045  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:31:46.268217  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:31:46.268328  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:31:46.268470  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:31:46.268610  157245 main.go:141] libmachine: Using SSH client type: native
	I0414 16:31:46.268804  157245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0414 16:31:46.268814  157245 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 16:31:46.370629  157245 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744648306.342390252
	
	I0414 16:31:46.370658  157245 fix.go:216] guest clock: 1744648306.342390252
	I0414 16:31:46.370667  157245 fix.go:229] Guest: 2025-04-14 16:31:46.342390252 +0000 UTC Remote: 2025-04-14 16:31:46.2656559 +0000 UTC m=+28.135423600 (delta=76.734352ms)
	I0414 16:31:46.370702  157245 fix.go:200] guest clock delta is within tolerance: 76.734352ms
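
The `date +%s.%N` round trip is a clock-skew check: the guest's seconds.nanoseconds timestamp is parsed and compared against a host-side sample, and provisioning proceeds because the ~77 ms delta is under the drift tolerance. Reproducing the arithmetic from the two timestamps in the log:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(1744648306, 342390252)                        // parsed from `date +%s.%N` on the VM
		host := time.Date(2025, 4, 14, 16, 31, 46, 265655900, time.UTC) // host-side sample from the log
		fmt.Println("guest clock delta:", guest.Sub(host))               // ≈76.734352ms, within tolerance
	}
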
	I0414 16:31:46.370708  157245 start.go:83] releasing machines lock for "addons-411768", held for 28.142389503s
	I0414 16:31:46.370730  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:31:46.370983  157245 main.go:141] libmachine: (addons-411768) Calling .GetIP
	I0414 16:31:46.373400  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:46.373735  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:31:46.373764  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:46.373943  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:31:46.374447  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:31:46.374620  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:31:46.374730  157245 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 16:31:46.374787  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:31:46.374822  157245 ssh_runner.go:195] Run: cat /version.json
	I0414 16:31:46.374847  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:31:46.377238  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:46.377425  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:46.377601  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:31:46.377630  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:46.377755  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:31:46.377823  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:31:46.377875  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:46.377930  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:31:46.378023  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:31:46.378104  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:31:46.378172  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:31:46.378227  157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
	I0414 16:31:46.378328  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:31:46.378462  157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
	I0414 16:31:46.480700  157245 ssh_runner.go:195] Run: systemctl --version
	I0414 16:31:46.486242  157245 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 16:31:46.645308  157245 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 16:31:46.651524  157245 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 16:31:46.651581  157245 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 16:31:46.669868  157245 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 16:31:46.669896  157245 start.go:495] detecting cgroup driver to use...
	I0414 16:31:46.669992  157245 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 16:31:46.685692  157245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 16:31:46.700090  157245 docker.go:217] disabling cri-docker service (if available) ...
	I0414 16:31:46.700150  157245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 16:31:46.713710  157245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 16:31:46.726966  157245 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 16:31:46.838767  157245 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 16:31:46.979989  157245 docker.go:233] disabling docker service ...
	I0414 16:31:46.980067  157245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 16:31:47.003477  157245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 16:31:47.016069  157245 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 16:31:47.150424  157245 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 16:31:47.287057  157245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 16:31:47.300548  157245 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 16:31:47.318834  157245 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 16:31:47.318902  157245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 16:31:47.329716  157245 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 16:31:47.329783  157245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 16:31:47.340624  157245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 16:31:47.351334  157245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 16:31:47.362121  157245 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 16:31:47.373147  157245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 16:31:47.383994  157245 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 16:31:47.401225  157245 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
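
Taken together, the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O pins the pause image, uses the cgroupfs driver with conmon placed in the pod cgroup, and lowers the unprivileged-port floor to 0. Reconstructed from those commands (not dumped from the VM), the affected lines end up roughly as:

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
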
	I0414 16:31:47.411919  157245 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 16:31:47.421523  157245 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 16:31:47.421583  157245 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 16:31:47.434300  157245 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
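
The status-255 sysctl failure above is expected on a fresh VM: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, so minikube falls back to modprobe and then enables IPv4 forwarding. The same probe-then-load fallback as a sketch (requires root; illustrative only):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(key); err != nil {
			// Sysctl not visible yet: load the bridge netfilter module first.
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Printf("modprobe br_netfilter failed: %v: %s\n", err, out)
				return
			}
		}
		v, _ := os.ReadFile(key)
		fmt.Printf("bridge-nf-call-iptables = %s", v)
	}
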
	I0414 16:31:47.444176  157245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 16:31:47.564481  157245 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 16:31:47.663689  157245 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 16:31:47.663772  157245 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 16:31:47.668684  157245 start.go:563] Will wait 60s for crictl version
	I0414 16:31:47.668749  157245 ssh_runner.go:195] Run: which crictl
	I0414 16:31:47.672296  157245 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 16:31:47.711027  157245 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 16:31:47.711150  157245 ssh_runner.go:195] Run: crio --version
	I0414 16:31:47.738177  157245 ssh_runner.go:195] Run: crio --version
	I0414 16:31:47.765363  157245 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 16:31:47.766624  157245 main.go:141] libmachine: (addons-411768) Calling .GetIP
	I0414 16:31:47.769252  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:47.769571  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:31:47.769595  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:31:47.769815  157245 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 16:31:47.773499  157245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 16:31:47.785608  157245 kubeadm.go:883] updating cluster {Name:addons-411768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-411768 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 16:31:47.785697  157245 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 16:31:47.785734  157245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 16:31:47.817308  157245 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 16:31:47.817385  157245 ssh_runner.go:195] Run: which lz4
	I0414 16:31:47.821171  157245 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 16:31:47.825038  157245 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 16:31:47.825061  157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 16:31:49.106194  157245 crio.go:462] duration metric: took 1.285048967s to copy over tarball
	I0414 16:31:49.106305  157245 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 16:31:51.232325  157245 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.125982911s)
	I0414 16:31:51.232352  157245 crio.go:469] duration metric: took 2.126115401s to extract the tarball
	I0414 16:31:51.232360  157245 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 16:31:51.270529  157245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 16:31:51.317395  157245 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 16:31:51.317423  157245 cache_images.go:84] Images are preloaded, skipping loading
	I0414 16:31:51.317433  157245 kubeadm.go:934] updating node { 192.168.39.237 8443 v1.32.2 crio true true} ...
	I0414 16:31:51.317541  157245 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-411768 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-411768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
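
The empty `ExecStart=` line in the generated kubelet drop-in is deliberate systemd idiom: an empty assignment clears any ExecStart inherited from the base unit, so the line that follows becomes the only start command. The general pattern:

	[Service]
	ExecStart=
	ExecStart=/path/to/binary --node-specific --flags
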
	I0414 16:31:51.317619  157245 ssh_runner.go:195] Run: crio config
	I0414 16:31:51.361120  157245 cni.go:84] Creating CNI manager for ""
	I0414 16:31:51.361147  157245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 16:31:51.361161  157245 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 16:31:51.361180  157245 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.237 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-411768 NodeName:addons-411768 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 16:31:51.361284  157245 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-411768"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.237"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
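
The generated file is four YAML documents in one (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), separated by `---` and all consumed from the single `--config` path handed to kubeadm. A naive sketch of splitting such a multi-document file on its separators (kubeadm itself dispatches on each document's apiVersion/kind):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		yaml := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n"
		for i, doc := range strings.Split(yaml, "\n---\n") {
			// Naive kind extraction, adequate for this toy input only.
			kind := strings.TrimPrefix(strings.TrimSpace(doc), "kind: ")
			fmt.Printf("document %d: %s\n", i, kind)
		}
	}
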
	
	I0414 16:31:51.361344  157245 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 16:31:51.371245  157245 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 16:31:51.371304  157245 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 16:31:51.380591  157245 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0414 16:31:51.396369  157245 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 16:31:51.412364  157245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0414 16:31:51.428617  157245 ssh_runner.go:195] Run: grep 192.168.39.237	control-plane.minikube.internal$ /etc/hosts
	I0414 16:31:51.432186  157245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.237	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 16:31:51.443795  157245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 16:31:51.559741  157245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 16:31:51.576587  157245 certs.go:68] Setting up /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768 for IP: 192.168.39.237
	I0414 16:31:51.576605  157245 certs.go:194] generating shared ca certs ...
	I0414 16:31:51.576623  157245 certs.go:226] acquiring lock for ca certs: {Name:mk65518f71a0fe967168d84423f624d889cf0622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 16:31:51.576752  157245 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key
	I0414 16:31:51.816547  157245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt ...
	I0414 16:31:51.816576  157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt: {Name:mkc05f9e104f16e9c207e08de1afcb71287bd637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 16:31:51.816738  157245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key ...
	I0414 16:31:51.816749  157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key: {Name:mka0d07948e661ee7d0f8d239e748e8feff38fc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 16:31:51.816815  157245 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key
	I0414 16:31:52.100992  157245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.crt ...
	I0414 16:31:52.101023  157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.crt: {Name:mkb8bfb6bf2583abdbb8dc4b7b2d17a28691dcfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 16:31:52.101182  157245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key ...
	I0414 16:31:52.101194  157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key: {Name:mkccf03a6725626a3d35aa22dfe62e64adc6f399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 16:31:52.101260  157245 certs.go:256] generating profile certs ...
	I0414 16:31:52.101312  157245 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.key
	I0414 16:31:52.101328  157245 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt with IP's: []
	I0414 16:31:52.291999  157245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt ...
	I0414 16:31:52.292035  157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: {Name:mk46b1618752311b5a3a60b53139a9d22b4ec008 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 16:31:52.292215  157245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.key ...
	I0414 16:31:52.292227  157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.key: {Name:mk0d1f2e4314112bb8b11a664cebc5e9f9dd4aa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 16:31:52.292304  157245 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.key.741b7197
	I0414 16:31:52.292324  157245 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.crt.741b7197 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.237]
	I0414 16:31:52.452776  157245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.crt.741b7197 ...
	I0414 16:31:52.452809  157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.crt.741b7197: {Name:mkc8fd882d9fc7f1325037b403b47e35b8b745d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 16:31:52.452980  157245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.key.741b7197 ...
	I0414 16:31:52.452993  157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.key.741b7197: {Name:mk7af8d7412836b3cabc64c25d860df9716a89b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 16:31:52.453071  157245 certs.go:381] copying /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.crt.741b7197 -> /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.crt
	I0414 16:31:52.453172  157245 certs.go:385] copying /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.key.741b7197 -> /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.key
	I0414 16:31:52.453235  157245 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/proxy-client.key
	I0414 16:31:52.453256  157245 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/proxy-client.crt with IP's: []
	I0414 16:31:52.909585  157245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/proxy-client.crt ...
	I0414 16:31:52.909620  157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/proxy-client.crt: {Name:mk30d90411fde7688f22a78c7b25d39671d22af2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 16:31:52.909786  157245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/proxy-client.key ...
	I0414 16:31:52.909799  157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/proxy-client.key: {Name:mk4dd2fda4fc19d3541d44d291b72ed4affa80b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 16:31:52.910035  157245 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 16:31:52.910078  157245 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem (1082 bytes)
	I0414 16:31:52.910106  157245 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem (1123 bytes)
	I0414 16:31:52.910131  157245 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem (1675 bytes)
	I0414 16:31:52.910673  157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 16:31:52.938330  157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 16:31:52.962684  157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 16:31:52.986413  157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 16:31:53.010120  157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 16:31:53.033660  157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 16:31:53.057129  157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 16:31:53.080122  157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 16:31:53.112722  157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 16:31:53.141259  157245 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 16:31:53.161383  157245 ssh_runner.go:195] Run: openssl version
	I0414 16:31:53.167380  157245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 16:31:53.177593  157245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 16:31:53.181814  157245 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 16:31 /usr/share/ca-certificates/minikubeCA.pem
	I0414 16:31:53.181882  157245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 16:31:53.187497  157245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
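
The `b5213941.0` link name is not arbitrary: OpenSSL looks up CAs in /etc/ssl/certs by the subject-name hash that `openssl x509 -hash -noout` prints, with a `.N` collision counter appended, so for this minikubeCA certificate the hash is evidently b5213941:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
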
	I0414 16:31:53.197397  157245 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 16:31:53.201753  157245 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 16:31:53.201802  157245 kubeadm.go:392] StartCluster: {Name:addons-411768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-411768 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 16:31:53.201905  157245 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 16:31:53.201943  157245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 16:31:53.242352  157245 cri.go:89] found id: ""
	I0414 16:31:53.242423  157245 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 16:31:53.251899  157245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 16:31:53.261098  157245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 16:31:53.270416  157245 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 16:31:53.270438  157245 kubeadm.go:157] found existing configuration files:
	
	I0414 16:31:53.270482  157245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 16:31:53.278904  157245 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 16:31:53.278948  157245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 16:31:53.287548  157245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 16:31:53.303510  157245 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 16:31:53.303571  157245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 16:31:53.314813  157245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 16:31:53.323856  157245 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 16:31:53.323921  157245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 16:31:53.333173  157245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 16:31:53.341922  157245 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 16:31:53.341978  157245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
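	[Editor's note: the four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed if the check fails (here the files simply don't exist, so every grep exits with status 2). A minimal Go sketch of the same pattern; the function name and direct use of os/exec are illustrative, not minikube's ssh_runner API:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// cleanupStaleKubeconfigs mirrors the grep-then-rm pattern in the log:
	// if a kubeconfig does not reference the expected endpoint, delete it.
	func cleanupStaleKubeconfigs(endpoint string) {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the endpoint is absent (or the file is missing).
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q may not be in %s - removing\n", endpoint, f)
				exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}
	
	func main() {
		cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443")
	}
	]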
	I0414 16:31:53.351231  157245 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 16:31:53.402096  157245 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 16:31:53.402178  157245 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 16:31:53.502653  157245 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 16:31:53.502817  157245 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 16:31:53.502948  157245 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 16:31:53.510466  157245 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 16:31:53.743764  157245 out.go:235]   - Generating certificates and keys ...
	I0414 16:31:53.743887  157245 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 16:31:53.743943  157245 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 16:31:53.744034  157245 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 16:31:53.971856  157245 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 16:31:54.101338  157245 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 16:31:54.448455  157245 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 16:31:54.791191  157245 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 16:31:54.791323  157245 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-411768 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
	I0414 16:31:54.955997  157245 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 16:31:54.956217  157245 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-411768 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
	I0414 16:31:55.014848  157245 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 16:31:55.105404  157245 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 16:31:55.220389  157245 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 16:31:55.220487  157245 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 16:31:55.314104  157245 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 16:31:55.507982  157245 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 16:31:55.788092  157245 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 16:31:56.132658  157245 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 16:31:56.241858  157245 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 16:31:56.242049  157245 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 16:31:56.244266  157245 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 16:31:56.245941  157245 out.go:235]   - Booting up control plane ...
	I0414 16:31:56.246023  157245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 16:31:56.246090  157245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 16:31:56.246547  157245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 16:31:56.262347  157245 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 16:31:56.268582  157245 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 16:31:56.268638  157245 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 16:31:56.396463  157245 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 16:31:56.396586  157245 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 16:31:56.897913  157245 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.995015ms
	I0414 16:31:56.898020  157245 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 16:32:01.397257  157245 kubeadm.go:310] [api-check] The API server is healthy after 4.501396666s
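	[Editor's note: the [kubelet-check] and [api-check] phases above are plain HTTP(S) polls against local health endpoints (the kubelet's http://127.0.0.1:10248/healthz, and the API server's /healthz over TLS), each capped at 4m0s. A rough Go equivalent of that wait loop, an illustration rather than kubeadm's code:
	
	package main
	
	import (
		"fmt"
		"net/http"
		"time"
	)
	
	// waitHealthy polls url until it returns 200 OK or the deadline passes,
	// the same shape as kubeadm's kubelet-check / api-check phases.
	func waitHealthy(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}
	
	func main() {
		if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	]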
	I0414 16:32:01.408244  157245 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 16:32:01.420893  157245 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 16:32:01.445113  157245 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 16:32:01.445371  157245 kubeadm.go:310] [mark-control-plane] Marking the node addons-411768 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 16:32:01.459888  157245 kubeadm.go:310] [bootstrap-token] Using token: ajtwy5.kapiw6da3r2hdoce
	I0414 16:32:01.461000  157245 out.go:235]   - Configuring RBAC rules ...
	I0414 16:32:01.461143  157245 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 16:32:01.464927  157245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 16:32:01.470514  157245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 16:32:01.473470  157245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 16:32:01.478503  157245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 16:32:01.481039  157245 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 16:32:01.803030  157245 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 16:32:02.243354  157245 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 16:32:02.803423  157245 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 16:32:02.804155  157245 kubeadm.go:310] 
	I0414 16:32:02.804217  157245 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 16:32:02.804222  157245 kubeadm.go:310] 
	I0414 16:32:02.804322  157245 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 16:32:02.804332  157245 kubeadm.go:310] 
	I0414 16:32:02.804366  157245 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 16:32:02.804443  157245 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 16:32:02.804527  157245 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 16:32:02.804545  157245 kubeadm.go:310] 
	I0414 16:32:02.804604  157245 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 16:32:02.804613  157245 kubeadm.go:310] 
	I0414 16:32:02.804687  157245 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 16:32:02.804698  157245 kubeadm.go:310] 
	I0414 16:32:02.804743  157245 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 16:32:02.804808  157245 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 16:32:02.804890  157245 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 16:32:02.804898  157245 kubeadm.go:310] 
	I0414 16:32:02.805007  157245 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 16:32:02.805117  157245 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 16:32:02.805128  157245 kubeadm.go:310] 
	I0414 16:32:02.805216  157245 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ajtwy5.kapiw6da3r2hdoce \
	I0414 16:32:02.805365  157245 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d \
	I0414 16:32:02.805397  157245 kubeadm.go:310] 	--control-plane 
	I0414 16:32:02.805404  157245 kubeadm.go:310] 
	I0414 16:32:02.805491  157245 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 16:32:02.805499  157245 kubeadm.go:310] 
	I0414 16:32:02.805599  157245 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ajtwy5.kapiw6da3r2hdoce \
	I0414 16:32:02.805758  157245 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d 
	I0414 16:32:02.806631  157245 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
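	[Editor's note: the --discovery-token-ca-cert-hash value printed in the join commands above is a SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA certificate. A self-contained Go sketch that reproduces such a value from the CA path minikube uses in this run:
	
	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm pins the SHA-256 of the CA's Subject Public Key Info (SPKI).
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}
	]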
	I0414 16:32:02.806651  157245 cni.go:84] Creating CNI manager for ""
	I0414 16:32:02.806658  157245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 16:32:02.808086  157245 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 16:32:02.809134  157245 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 16:32:02.822996  157245 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
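	[Editor's note: the 496-byte /etc/cni/net.d/1-k8s.conflist copied above is minikube's bridge CNI configuration. Its exact contents are not shown in this log; the sketch below writes a representative bridge conflist (the JSON values are illustrative, not the file minikube shipped):
	
	package main
	
	import "os"
	
	// A representative bridge CNI conflist; values are illustrative, not the
	// exact 496-byte file minikube copies to /etc/cni/net.d/1-k8s.conflist.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": {"portMappings": true}
	    }
	  ]
	}`
	
	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}
	]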
	I0414 16:32:02.840888  157245 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 16:32:02.840991  157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 16:32:02.841016  157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-411768 minikube.k8s.io/updated_at=2025_04_14T16_32_02_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f1e69a1cd498979c80dbe968253c827f6eb2cf37 minikube.k8s.io/name=addons-411768 minikube.k8s.io/primary=true
	I0414 16:32:02.985991  157245 ops.go:34] apiserver oom_adj: -16
	I0414 16:32:02.986098  157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 16:32:03.487047  157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 16:32:03.986223  157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 16:32:04.486993  157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 16:32:04.986829  157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 16:32:05.486806  157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 16:32:05.986582  157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 16:32:06.487112  157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 16:32:06.986262  157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 16:32:07.077673  157245 kubeadm.go:1113] duration metric: took 4.236743459s to wait for elevateKubeSystemPrivileges
	I0414 16:32:07.077717  157245 kubeadm.go:394] duration metric: took 13.87591877s to StartCluster
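	[Editor's note: the burst of `kubectl get sa default` calls above (one every ~500ms from 16:32:02.986 to 16:32:07.077) is minikube waiting for the default service account to exist before finishing the kube-system privilege elevation; once the SA appears, the loop ends and the duration metric is logged. A sketch of that retry loop, shelling out to kubectl; the helper shape is illustrative:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// waitForDefaultSA retries `kubectl get sa default` until it succeeds,
	// matching the ~500ms cadence visible in the log above.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig="+kubeconfig)
			if cmd.Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}
	
	func main() {
		err := waitForDefaultSA("/var/lib/minikube/binaries/v1.32.2/kubectl",
			"/var/lib/minikube/kubeconfig", 2*time.Minute)
		fmt.Println(err)
	}
	]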
	I0414 16:32:07.077741  157245 settings.go:142] acquiring lock: {Name:mk0f1596f566b3225bf96154f374fff0641b21e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 16:32:07.077896  157245 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 16:32:07.078346  157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 16:32:07.078532  157245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 16:32:07.078574  157245 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 16:32:07.078665  157245 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0414 16:32:07.078772  157245 config.go:182] Loaded profile config "addons-411768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 16:32:07.078790  157245 addons.go:69] Setting yakd=true in profile "addons-411768"
	I0414 16:32:07.078808  157245 addons.go:69] Setting ingress-dns=true in profile "addons-411768"
	I0414 16:32:07.078834  157245 addons.go:69] Setting metrics-server=true in profile "addons-411768"
	I0414 16:32:07.078840  157245 addons.go:69] Setting storage-provisioner=true in profile "addons-411768"
	I0414 16:32:07.078843  157245 addons.go:238] Setting addon ingress-dns=true in "addons-411768"
	I0414 16:32:07.078853  157245 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-411768"
	I0414 16:32:07.078861  157245 addons.go:69] Setting registry=true in profile "addons-411768"
	I0414 16:32:07.078867  157245 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-411768"
	I0414 16:32:07.078883  157245 addons.go:238] Setting addon registry=true in "addons-411768"
	I0414 16:32:07.078888  157245 host.go:66] Checking if "addons-411768" exists ...
	I0414 16:32:07.078910  157245 host.go:66] Checking if "addons-411768" exists ...
	I0414 16:32:07.078914  157245 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-411768"
	I0414 16:32:07.078953  157245 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-411768"
	I0414 16:32:07.078934  157245 addons.go:69] Setting volcano=true in profile "addons-411768"
	I0414 16:32:07.078989  157245 addons.go:238] Setting addon volcano=true in "addons-411768"
	I0414 16:32:07.079004  157245 host.go:66] Checking if "addons-411768" exists ...
	I0414 16:32:07.079014  157245 addons.go:69] Setting default-storageclass=true in profile "addons-411768"
	I0414 16:32:07.079051  157245 host.go:66] Checking if "addons-411768" exists ...
	I0414 16:32:07.079059  157245 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-411768"
	I0414 16:32:07.079074  157245 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-411768"
	I0414 16:32:07.079094  157245 host.go:66] Checking if "addons-411768" exists ...
	I0414 16:32:07.079135  157245 addons.go:69] Setting cloud-spanner=true in profile "addons-411768"
	I0414 16:32:07.079179  157245 addons.go:238] Setting addon cloud-spanner=true in "addons-411768"
	I0414 16:32:07.079288  157245 host.go:66] Checking if "addons-411768" exists ...
	I0414 16:32:07.079309  157245 addons.go:69] Setting ingress=true in profile "addons-411768"
	I0414 16:32:07.079330  157245 addons.go:238] Setting addon ingress=true in "addons-411768"
	I0414 16:32:07.079367  157245 host.go:66] Checking if "addons-411768" exists ...
	I0414 16:32:07.079442  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.079480  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.079496  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.079513  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.079052  157245 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-411768"
	I0414 16:32:07.078825  157245 addons.go:69] Setting inspektor-gadget=true in profile "addons-411768"
	I0414 16:32:07.079540  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.079553  157245 addons.go:238] Setting addon inspektor-gadget=true in "addons-411768"
	I0414 16:32:07.079553  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.079564  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.079567  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.079577  157245 addons.go:69] Setting gcp-auth=true in profile "addons-411768"
	I0414 16:32:07.079582  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.079595  157245 mustload.go:65] Loading cluster: addons-411768
	I0414 16:32:07.079615  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.078853  157245 addons.go:238] Setting addon storage-provisioner=true in "addons-411768"
	I0414 16:32:07.078845  157245 addons.go:238] Setting addon metrics-server=true in "addons-411768"
	I0414 16:32:07.079291  157245 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-411768"
	I0414 16:32:07.078815  157245 addons.go:238] Setting addon yakd=true in "addons-411768"
	I0414 16:32:07.079701  157245 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-411768"
	I0414 16:32:07.079538  157245 addons.go:69] Setting volumesnapshots=true in profile "addons-411768"
	I0414 16:32:07.079747  157245 addons.go:238] Setting addon volumesnapshots=true in "addons-411768"
	I0414 16:32:07.079788  157245 host.go:66] Checking if "addons-411768" exists ...
	I0414 16:32:07.079797  157245 host.go:66] Checking if "addons-411768" exists ...
	I0414 16:32:07.079922  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.079945  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.079965  157245 config.go:182] Loaded profile config "addons-411768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 16:32:07.079976  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.080017  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.080140  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.080154  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.080160  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.080166  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.080170  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.080172  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.080191  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.080193  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.080236  157245 host.go:66] Checking if "addons-411768" exists ...
	I0414 16:32:07.080248  157245 host.go:66] Checking if "addons-411768" exists ...
	I0414 16:32:07.080320  157245 host.go:66] Checking if "addons-411768" exists ...
	I0414 16:32:07.080324  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.080353  157245 host.go:66] Checking if "addons-411768" exists ...
	I0414 16:32:07.080354  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.080561  157245 out.go:177] * Verifying Kubernetes components...
	I0414 16:32:07.082053  157245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 16:32:07.100792  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43937
	I0414 16:32:07.100984  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35649
	I0414 16:32:07.101323  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38819
	I0414 16:32:07.101508  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.101558  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.101974  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.102128  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.102149  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.102535  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.102551  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
	I0414 16:32:07.103008  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.103028  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.103053  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.103319  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.103012  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.103157  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.103397  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.103361  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.104001  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.104028  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45921
	I0414 16:32:07.104229  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.104244  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.104974  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.106159  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.106207  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.106287  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.106319  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.106448  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.106497  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.106563  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.106588  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.106786  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.106814  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.106924  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.106964  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.107406  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.107444  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.107481  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.107575  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46207
	I0414 16:32:07.108205  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.108226  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.108890  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.108989  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.109580  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.109599  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.109973  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.110985  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.111019  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
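	[Editor's note: the repeated "Found binary path / Launching plugin server / Plugin server listening at 127.0.0.1:<port>" triplets are libmachine's driver-plugin handshake: each addon goroutine launches the docker-machine-driver-kvm2 binary as a subprocess, the subprocess serves an RPC server on an ephemeral localhost port, and the parent dials it and drives the machine through RPC calls (.GetVersion, .GetState, .GetSSHHostname, ...). A stripped-down sketch of the client side, assuming a plugin that prints its listen address on stdout; the service/method names here are illustrative:
	
	package main
	
	import (
		"bufio"
		"fmt"
		"net/rpc"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Launch the driver plugin; it prints "127.0.0.1:<port>" once its
		// RPC server is listening (protocol shape per the log, simplified).
		cmd := exec.Command("./docker-machine-driver-kvm2")
		stdout, _ := cmd.StdoutPipe()
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		line, _ := bufio.NewReader(stdout).ReadString('\n')
	
		client, err := rpc.Dial("tcp", strings.TrimSpace(line))
		if err != nil {
			panic(err)
		}
		defer client.Close()
	
		// Illustrative call corresponding to "() Calling .GetVersion" above.
		var version int
		if err := client.Call("RPCServerDriver.GetVersion", struct{}{}, &version); err != nil {
			panic(err)
		}
		fmt.Println("driver API version:", version)
	}
	]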
	I0414 16:32:07.139383  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37875
	I0414 16:32:07.140059  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.140111  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.140376  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40955
	I0414 16:32:07.141086  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.141228  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.141915  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.141937  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.142497  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.142873  157245 main.go:141] libmachine: (addons-411768) Calling .GetState
	I0414 16:32:07.147010  157245 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-411768"
	I0414 16:32:07.147060  157245 host.go:66] Checking if "addons-411768" exists ...
	I0414 16:32:07.147527  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.147571  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.147823  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I0414 16:32:07.148696  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.148936  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.148958  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.149379  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.149559  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.149586  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.149906  157245 main.go:141] libmachine: (addons-411768) Calling .GetState
	I0414 16:32:07.150643  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41571
	I0414 16:32:07.150878  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.151083  157245 main.go:141] libmachine: (addons-411768) Calling .GetState
	I0414 16:32:07.151274  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.151842  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:32:07.152109  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:07.152136  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:07.152375  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:07.152389  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:07.152409  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:07.152468  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:07.152563  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42999
	I0414 16:32:07.153139  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41205
	I0414 16:32:07.153563  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.153664  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45599
	I0414 16:32:07.154008  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.154215  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.154426  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.154447  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.154451  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:07.154486  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:07.154504  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	W0414 16:32:07.154646  157245 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0414 16:32:07.154793  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.154827  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.154899  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.155198  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.155525  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I0414 16:32:07.155603  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.155628  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.155685  157245 main.go:141] libmachine: (addons-411768) Calling .GetState
	I0414 16:32:07.156374  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.156438  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.156522  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.156548  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.156939  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.157012  157245 host.go:66] Checking if "addons-411768" exists ...
	I0414 16:32:07.157230  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.157649  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.157676  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.166544  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.166595  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.167078  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.167221  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37285
	I0414 16:32:07.168299  157245 addons.go:238] Setting addon default-storageclass=true in "addons-411768"
	I0414 16:32:07.168337  157245 host.go:66] Checking if "addons-411768" exists ...
	I0414 16:32:07.168737  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.168761  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.169027  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.169440  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.169469  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.170052  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.170084  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.170108  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.170124  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.170571  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33449
	I0414 16:32:07.170710  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.170768  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.170966  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.171398  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.171433  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.171607  157245 main.go:141] libmachine: (addons-411768) Calling .GetState
	I0414 16:32:07.171739  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37227
	I0414 16:32:07.172096  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.172116  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.172628  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.173234  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.173276  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.173649  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:32:07.173864  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.174302  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.174318  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.174464  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43311
	I0414 16:32:07.174901  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.174982  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.175521  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.175540  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.175605  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
	I0414 16:32:07.175657  157245 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0414 16:32:07.176200  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.176421  157245 main.go:141] libmachine: (addons-411768) Calling .GetState
	I0414 16:32:07.177749  157245 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0414 16:32:07.178217  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.178263  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.178474  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.179207  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:32:07.179906  157245 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0414 16:32:07.180518  157245 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0414 16:32:07.181218  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.181240  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.181498  157245 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0414 16:32:07.181519  157245 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0414 16:32:07.181538  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:32:07.181814  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.181864  157245 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0414 16:32:07.181879  157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0414 16:32:07.181895  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:32:07.182055  157245 main.go:141] libmachine: (addons-411768) Calling .GetState
	I0414 16:32:07.184510  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:32:07.186720  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36159
	I0414 16:32:07.187062  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.187251  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.187505  157245 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0414 16:32:07.187748  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.187764  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.188252  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:32:07.188272  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.188315  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.188511  157245 main.go:141] libmachine: (addons-411768) Calling .GetState
	I0414 16:32:07.189052  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.189775  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:32:07.190164  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:32:07.190286  157245 out.go:177]   - Using image docker.io/registry:2.8.3
	I0414 16:32:07.190788  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:32:07.190982  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:32:07.190985  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:32:07.191008  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.191206  157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
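	[Editor's note: each "new ssh client: &{IP:... Port:22 SSHKeyPath:... Username:docker}" line constructs a key-authenticated SSH connection that the subsequent scp and Run steps reuse. An equivalent sketch using golang.org/x/crypto/ssh; minikube's sshutil wraps this differently:
	
	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.168.39.237:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
	
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, _ := sess.CombinedOutput("uname -a")
		fmt.Print(string(out))
	}
	]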
	I0414 16:32:07.191582  157245 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0414 16:32:07.191662  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
	I0414 16:32:07.191696  157245 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0414 16:32:07.191714  157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0414 16:32:07.191731  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:32:07.191797  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:32:07.192031  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:32:07.192638  157245 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0414 16:32:07.192655  157245 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0414 16:32:07.192672  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:32:07.192734  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37699
	I0414 16:32:07.192733  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:32:07.193486  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.193581  157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
	I0414 16:32:07.194125  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.194147  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.194605  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.195441  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.195492  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.197319  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.197626  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.197727  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:32:07.197760  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.198074  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:32:07.198172  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.198425  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:32:07.198441  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:32:07.198465  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.198484  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:32:07.198678  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:32:07.198695  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:32:07.198891  157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
	I0414 16:32:07.198901  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:32:07.199108  157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
	I0414 16:32:07.200232  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39239
	I0414 16:32:07.200716  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.200737  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.200828  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.201113  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.201433  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.201460  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.201811  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.201889  157245 main.go:141] libmachine: (addons-411768) Calling .GetState
	I0414 16:32:07.202096  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:32:07.203805  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:32:07.205350  157245 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0414 16:32:07.206613  157245 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0414 16:32:07.206633  157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0414 16:32:07.206650  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:32:07.210509  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.210985  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:32:07.211006  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.211198  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:32:07.211363  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:32:07.211534  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:32:07.211599  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44977
	I0414 16:32:07.211973  157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
	I0414 16:32:07.212415  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.212839  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.212874  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.213060  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36957
	I0414 16:32:07.213269  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.213431  157245 main.go:141] libmachine: (addons-411768) Calling .GetState
	I0414 16:32:07.213434  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.214366  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.214408  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.215015  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.215104  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41975
	I0414 16:32:07.215269  157245 main.go:141] libmachine: (addons-411768) Calling .GetState
	I0414 16:32:07.215611  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35381
	I0414 16:32:07.215662  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.215770  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42093
	I0414 16:32:07.216109  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.216207  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.216247  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.216465  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:32:07.216837  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.217020  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.217173  157245 main.go:141] libmachine: (addons-411768) Calling .GetState
	I0414 16:32:07.217361  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:32:07.217521  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.217548  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.217677  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.217697  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.218192  157245 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 16:32:07.218203  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.218396  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46505
	I0414 16:32:07.218677  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.218779  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.218818  157245 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0414 16:32:07.219031  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.219082  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.219246  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:32:07.219360  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.219380  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.219389  157245 main.go:141] libmachine: (addons-411768) Calling .GetState
	I0414 16:32:07.219879  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.219919  157245 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0414 16:32:07.219939  157245 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0414 16:32:07.219957  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:32:07.220083  157245 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 16:32:07.220101  157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 16:32:07.220116  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:32:07.220458  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:07.220511  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:07.221225  157245 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0414 16:32:07.222073  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:32:07.223405  157245 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0414 16:32:07.223418  157245 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0414 16:32:07.224216  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.224492  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.224521  157245 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0414 16:32:07.224540  157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0414 16:32:07.224556  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:32:07.224655  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:32:07.224665  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.224838  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:32:07.224887  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:32:07.224935  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.225054  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:32:07.225090  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:32:07.225234  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:32:07.225350  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:32:07.225487  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:32:07.225480  157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
	I0414 16:32:07.225632  157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
	I0414 16:32:07.225733  157245 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0414 16:32:07.226989  157245 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0414 16:32:07.227980  157245 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0414 16:32:07.228203  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.228578  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:32:07.228591  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.228784  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:32:07.228967  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:32:07.229056  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:32:07.229171  157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
	I0414 16:32:07.230056  157245 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0414 16:32:07.231280  157245 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0414 16:32:07.232584  157245 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0414 16:32:07.234202  157245 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0414 16:32:07.234220  157245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0414 16:32:07.234240  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:32:07.237573  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.238511  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:32:07.238533  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.238730  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:32:07.238913  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:32:07.239093  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:32:07.239107  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43417
	I0414 16:32:07.239287  157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
	I0414 16:32:07.239509  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.239916  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.239938  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.240346  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.240551  157245 main.go:141] libmachine: (addons-411768) Calling .GetState
	I0414 16:32:07.242582  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:32:07.244043  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45889
	I0414 16:32:07.244097  157245 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.31
	I0414 16:32:07.244568  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.245058  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.245084  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.245268  157245 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0414 16:32:07.245283  157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0414 16:32:07.245300  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:32:07.245448  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.245618  157245 main.go:141] libmachine: (addons-411768) Calling .GetState
	I0414 16:32:07.247437  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44493
	I0414 16:32:07.248054  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.248448  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36205
	I0414 16:32:07.248483  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:32:07.248625  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.248647  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.248721  157245 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 16:32:07.248736  157245 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 16:32:07.248753  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:32:07.248778  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42823
	I0414 16:32:07.249095  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.249150  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.249553  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:07.249682  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.249908  157245 main.go:141] libmachine: (addons-411768) Calling .GetState
	I0414 16:32:07.250096  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:32:07.250114  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.250215  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.250233  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.250313  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:32:07.250447  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:32:07.250547  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:32:07.250645  157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
	I0414 16:32:07.250933  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.251166  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:07.251178  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:07.251231  157245 main.go:141] libmachine: (addons-411768) Calling .GetState
	I0414 16:32:07.251689  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:32:07.251748  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:07.251949  157245 main.go:141] libmachine: (addons-411768) Calling .GetState
	I0414 16:32:07.252944  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.253356  157245 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0414 16:32:07.253491  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:32:07.253511  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.253559  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:32:07.253612  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:32:07.253755  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:32:07.253822  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:32:07.253904  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:32:07.254079  157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
	I0414 16:32:07.254626  157245 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0414 16:32:07.254646  157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0414 16:32:07.254662  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:32:07.255226  157245 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0414 16:32:07.255367  157245 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0414 16:32:07.256210  157245 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0414 16:32:07.256230  157245 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0414 16:32:07.256248  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:32:07.257502  157245 out.go:177]   - Using image docker.io/busybox:stable
	I0414 16:32:07.257586  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.258106  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:32:07.258158  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.258374  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:32:07.258605  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:32:07.258625  157245 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0414 16:32:07.258637  157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0414 16:32:07.258659  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:32:07.258745  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:32:07.258912  157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
	I0414 16:32:07.259601  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.260048  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:32:07.260074  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.260218  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	W0414 16:32:07.260270  157245 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36916->192.168.39.237:22: read: connection reset by peer
	I0414 16:32:07.260379  157245 retry.go:31] will retry after 256.435878ms: ssh: handshake failed: read tcp 192.168.39.1:36916->192.168.39.237:22: read: connection reset by peer
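The handshake failure above is transient (many parallel dialers are hitting the guest's sshd at once), so the client schedules another attempt; the "will retry after 256.435878ms" delay comes from a randomized backoff. A minimal sketch of the pattern (assumed shape; minikube's retry.go may differ):

    package main

    import (
        "math/rand"
        "time"
    )

    // retryExpo retries fn with exponential backoff plus jitter so that
    // parallel dialers don't all reconnect in lockstep (initial must be > 0).
    func retryExpo(fn func() error, initial time.Duration, attempts int) error {
        var err error
        delay := initial
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)))) // jittered wait
            delay *= 2
        }
        return err
    }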
	I0414 16:32:07.260594  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:32:07.260750  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:32:07.260958  157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
	I0414 16:32:07.261482  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.261902  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:32:07.261930  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:07.262043  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:32:07.262205  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:32:07.262341  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:32:07.262477  157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
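Each "new ssh client" line above is one parallel connection that will push a single addon manifest, and "scp memory --> <path> (N bytes)" means the payload is streamed from the host process's memory rather than staged on disk. A hedged illustration of the idea using golang.org/x/crypto/ssh (the `sudo tee` transfer here is an assumption for brevity, not minikube's actual sshutil/scp implementation):

    package main

    import (
        "bytes"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // copyToRemote streams an in-memory payload to a root-owned file on the
    // guest, the effect behind "scp memory --> /etc/kubernetes/addons/...".
    func copyToRemote(addr, user, keyPath, dest string, payload []byte) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable for a throwaway local VM
        })
        if err != nil {
            return err
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()

        sess.Stdin = bytes.NewReader(payload)
        return sess.Run(fmt.Sprintf("sudo tee %s > /dev/null", dest))
    }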
	I0414 16:32:07.510204  157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0414 16:32:07.535405  157245 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0414 16:32:07.535428  157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0414 16:32:07.554124  157245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 16:32:07.554180  157245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 16:32:07.590129  157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0414 16:32:07.597474  157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0414 16:32:07.649867  157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 16:32:07.681091  157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0414 16:32:07.758590  157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0414 16:32:07.769413  157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 16:32:07.785004  157245 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0414 16:32:07.785026  157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0414 16:32:07.801970  157245 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0414 16:32:07.801990  157245 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0414 16:32:07.901876  157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0414 16:32:07.905329  157245 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0414 16:32:07.905346  157245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0414 16:32:07.959073  157245 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0414 16:32:07.959099  157245 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0414 16:32:08.000884  157245 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0414 16:32:08.000913  157245 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0414 16:32:08.003809  157245 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0414 16:32:08.003828  157245 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0414 16:32:08.057722  157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0414 16:32:08.080964  157245 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0414 16:32:08.080985  157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0414 16:32:08.152996  157245 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0414 16:32:08.153022  157245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0414 16:32:08.237587  157245 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0414 16:32:08.237623  157245 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0414 16:32:08.278538  157245 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 16:32:08.278567  157245 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0414 16:32:08.309584  157245 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0414 16:32:08.309611  157245 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0414 16:32:08.351902  157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0414 16:32:08.360705  157245 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0414 16:32:08.360724  157245 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0414 16:32:08.399688  157245 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0414 16:32:08.399711  157245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0414 16:32:08.587012  157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 16:32:08.592210  157245 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0414 16:32:08.592232  157245 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0414 16:32:08.604465  157245 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0414 16:32:08.604483  157245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0414 16:32:08.609658  157245 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0414 16:32:08.609676  157245 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0414 16:32:08.816206  157245 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0414 16:32:08.816228  157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0414 16:32:08.899853  157245 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 16:32:08.899885  157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0414 16:32:08.947672  157245 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0414 16:32:08.947697  157245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0414 16:32:09.017802  157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0414 16:32:09.179922  157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.669675177s)
	I0414 16:32:09.179969  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:09.179982  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:09.180348  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:09.180371  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:09.180426  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:09.180457  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:09.180470  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:09.180779  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:09.180779  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:09.180810  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:09.255913  157245 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0414 16:32:09.255935  157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0414 16:32:09.275044  157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 16:32:09.425145  157245 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0414 16:32:09.425176  157245 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0414 16:32:09.695982  157245 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0414 16:32:09.696008  157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0414 16:32:09.975776  157245 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0414 16:32:09.975807  157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0414 16:32:10.172563  157245 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.618340999s)
	I0414 16:32:10.172608  157245 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0414 16:32:10.172610  157245 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.618454484s)
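The sed pipeline whose completion is logged above edits the coredns ConfigMap in place: it splices a hosts block immediately before the existing `forward . /etc/resolv.conf` directive (and a `log` directive before `errors`), so host.minikube.internal resolves to the host-side bridge IP from inside the cluster. After the `kubectl replace`, the affected Corefile stanza reads roughly:

        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf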
	I0414 16:32:10.173257  157245 node_ready.go:35] waiting up to 6m0s for node "addons-411768" to be "Ready" ...
	I0414 16:32:10.181456  157245 node_ready.go:49] node "addons-411768" has status "Ready":"True"
	I0414 16:32:10.181473  157245 node_ready.go:38] duration metric: took 8.190521ms for node "addons-411768" to be "Ready" ...
	I0414 16:32:10.181481  157245 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 16:32:10.197447  157245 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace to be "Ready" ...
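The node_ready and pod_ready lines above poll the API server rather than watch it: the node counts as "Ready" once its NodeReady condition reports True (the `"Ready":"True"` in the log), and the same loop shape is then applied to each system-critical pod. A client-go sketch of the node half (illustrative; the helper name is hypothetical):

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls until the node's NodeReady condition is True.
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API errors: keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }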
	I0414 16:32:10.476856  157245 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0414 16:32:10.476890  157245 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0414 16:32:10.694797  157245 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-411768" context rescaled to 1 replicas
	I0414 16:32:10.812006  157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0414 16:32:11.763503  157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.173337705s)
	I0414 16:32:11.763552  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:11.763564  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:11.763929  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:11.764027  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:11.764045  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:11.764052  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:11.764056  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:11.764342  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:11.764386  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:11.764416  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:12.214485  157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
	I0414 16:32:14.103020  157245 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0414 16:32:14.103060  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:32:14.106672  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:14.107169  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:32:14.107212  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:14.107372  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:32:14.107542  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:32:14.107674  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:32:14.107803  157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
	I0414 16:32:14.256447  157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
	I0414 16:32:14.510568  157245 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0414 16:32:14.541375  157245 addons.go:238] Setting addon gcp-auth=true in "addons-411768"
	I0414 16:32:14.541425  157245 host.go:66] Checking if "addons-411768" exists ...
	I0414 16:32:14.541750  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:14.541776  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:14.557041  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32787
	I0414 16:32:14.557516  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:14.557977  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:14.558000  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:14.558349  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:14.558968  157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:32:14.559036  157245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:32:14.574691  157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36379
	I0414 16:32:14.575221  157245 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:32:14.575692  157245 main.go:141] libmachine: Using API Version  1
	I0414 16:32:14.575711  157245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:32:14.576075  157245 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:32:14.576279  157245 main.go:141] libmachine: (addons-411768) Calling .GetState
	I0414 16:32:14.577929  157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
	I0414 16:32:14.578171  157245 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0414 16:32:14.578199  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
	I0414 16:32:14.580687  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:14.581036  157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
	I0414 16:32:14.581073  157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
	I0414 16:32:14.581207  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
	I0414 16:32:14.581385  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
	I0414 16:32:14.581540  157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
	I0414 16:32:14.581712  157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
	I0414 16:32:15.941800  157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.344287225s)
	I0414 16:32:15.941856  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.941876  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.941938  157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.2920377s)
	I0414 16:32:15.942010  157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.260894337s)
	I0414 16:32:15.942029  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.942040  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.942053  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.942042  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.942098  157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.183462668s)
	I0414 16:32:15.942132  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.942142  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.942201  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.942216  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:15.942231  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.942243  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.942273  157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.17283585s)
	I0414 16:32:15.942299  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.942311  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.942366  157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.040465575s)
	I0414 16:32:15.942381  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.942388  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.942405  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:15.942429  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:15.942451  157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.884706718s)
	I0414 16:32:15.942458  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.942460  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:15.942466  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.942467  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:15.942474  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.942478  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.942482  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.942485  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.942489  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:15.942499  157245 addons.go:479] Verifying addon ingress=true in "addons-411768"
	I0414 16:32:15.942531  157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.590606213s)
	I0414 16:32:15.942553  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.942571  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.942686  157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.355641837s)
	I0414 16:32:15.942700  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.942707  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.942722  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:15.942753  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.942761  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:15.942778  157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.924946651s)
	I0414 16:32:15.942791  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.942792  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:15.942800  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.942823  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.942830  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:15.942838  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.942845  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.942917  157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.667841574s)
	W0414 16:32:15.942972  157245 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0414 16:32:15.942998  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:15.943004  157245 retry.go:31] will retry after 326.879938ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
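Both failures above are the classic CRD ordering race: the volumesnapshot CRDs are created in the same apply batch as a VolumeSnapshotClass object, and the API server has not yet registered the new kind when that object arrives, hence "ensure CRDs are installed first". The addon machinery simply retries (the second attempt at 16:32:16 below adds --force); an alternative is to wait for the CRD's Established condition before applying instances, sketched here with the apiextensions clientset (illustrative, not minikube's code):

    package main

    import (
        "context"
        "time"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitCRDEstablished blocks until the named CRD is usable, which is
    // exactly what "ensure CRDs are installed first" is asking for.
    func waitCRDEstablished(cs apiextclient.Interface, name string, timeout time.Duration) error {
        return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
            crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // not found yet: keep polling
            }
            for _, c := range crd.Status.Conditions {
                if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
                    return true, nil
                }
            }
            return false, nil
        })
    }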
	I0414 16:32:15.943028  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.943036  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:15.943043  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.943068  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.943080  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:15.943097  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.943103  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.943141  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.943045  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:15.943310  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:15.943336  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.943343  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:15.944483  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:15.944510  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.944518  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:15.944526  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.944533  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.944598  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:15.944615  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.944621  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:15.944627  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.944633  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.944665  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:15.944680  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.944686  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:15.944844  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:15.944864  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.944881  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:15.944888  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.944894  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.944935  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:15.944953  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.944959  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:15.945197  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:15.945203  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.945218  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:15.945227  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.945229  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.945235  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.945245  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:15.945599  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:15.945628  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.945635  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:15.946225  157245 out.go:177] * Verifying ingress addon...
	I0414 16:32:15.947010  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:15.947042  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.947049  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:15.947057  157245 addons.go:479] Verifying addon metrics-server=true in "addons-411768"
	I0414 16:32:15.947233  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.947240  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:15.947248  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.947254  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.947967  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:15.948005  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.948013  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:15.948021  157245 addons.go:479] Verifying addon registry=true in "addons-411768"
	I0414 16:32:15.948543  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:15.948575  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.948583  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:15.948870  157245 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0414 16:32:15.949095  157245 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-411768 service yakd-dashboard -n yakd-dashboard
	
	I0414 16:32:15.949141  157245 out.go:177] * Verifying registry addon...
	I0414 16:32:15.951202  157245 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0414 16:32:15.963853  157245 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0414 16:32:15.963876  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:15.963979  157245 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0414 16:32:15.963998  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
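The kapi.go lines above implement per-addon verification: list the pods matching a label selector and keep reporting until every match leaves Pending and reports Ready. Roughly, as a client-go sketch (function and parameter names here are hypothetical):

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodsReady polls until at least one pod matches the selector and
    // every match has a PodReady=True condition.
    func waitPodsReady(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err != nil || len(pods.Items) == 0 {
                return false, nil // transient errors and empty lists: keep polling
            }
            for _, p := range pods.Items {
                ready := false
                for _, c := range p.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        ready = true
                    }
                }
                if !ready {
                    return false, nil
                }
            }
            return true, nil
        })
    }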
	I0414 16:32:15.983788  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.983807  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.984050  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.984084  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	W0414 16:32:15.984204  157245 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
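The warning above is an optimistic-concurrency failure rather than a broken addon: two writers raced to update the local-path StorageClass, and the loser's update carried a stale resourceVersion. The standard client-go remedy is to re-read the object and retry the mutation on conflict; a hedged sketch of marking a class default this way (illustrative, not the addon's code):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // markDefault re-reads the StorageClass and retries the update whenever
    // the API server rejects it with a resourceVersion conflict.
    func markDefault(cs kubernetes.Interface, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
            _, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
            return err
        })
    }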
	I0414 16:32:15.990288  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:15.990304  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:15.990587  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:15.990607  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:15.990608  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:16.271070  157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 16:32:16.452769  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:16.455283  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:16.710065  157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
	I0414 16:32:16.958015  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:16.958454  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:17.438594  157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.62650454s)
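
[Editor's note] The two "Completed:" lines above are batched kubectl applies that minikube runs over SSH inside the VM, passing many -f flags in a single invocation. A local stand-in using os/exec — paths mirror the log line, and kubectl being on PATH is an assumption:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// One invocation, many -f flags, as in the logged command.
	args := []string{"apply", "--force"}
	for _, f := range []string{
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		// ...remaining manifests from the log line
	} {
		args = append(args, "-f", f)
	}
	cmd := exec.Command("kubectl", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1) // non-zero exit mirrors a failed apply
	}
}
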
	I0414 16:32:17.438672  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:17.438680  157245 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.860481626s)
	I0414 16:32:17.438694  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:17.438991  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:17.439056  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:17.439077  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:17.439094  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:17.439104  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:17.439344  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:17.439390  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:17.439413  157245 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-411768"
	I0414 16:32:17.439431  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:17.439788  157245 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0414 16:32:17.441340  157245 out.go:177] * Verifying csi-hostpath-driver addon...
	I0414 16:32:17.442480  157245 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0414 16:32:17.443376  157245 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0414 16:32:17.443402  157245 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0414 16:32:17.443419  157245 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0414 16:32:17.454265  157245 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0414 16:32:17.454280  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:17.476839  157245 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0414 16:32:17.476862  157245 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0414 16:32:17.488320  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:17.488336  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:17.622440  157245 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0414 16:32:17.622467  157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0414 16:32:17.674372  157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
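
[Editor's note] The "scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)" line above means the manifest is rendered in memory and copied onto the node before the apply; only the ns and service YAMLs come from files. A trivial sketch of that render-then-write step — the manifest contents here are a placeholder, not the real webhook YAML:

package main

import "os"

func main() {
	// Stand-in for ssh_runner's "scp memory --> <path>": the manifest exists
	// only in memory until written onto the node. Contents assumed.
	manifest := []byte("# rendered gcp-auth-webhook.yaml (5421 bytes in the log)\n")
	if err := os.WriteFile("/etc/kubernetes/addons/gcp-auth-webhook.yaml", manifest, 0o644); err != nil {
		panic(err)
	}
}
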
	I0414 16:32:17.948045  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:17.951253  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:17.953595  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:18.031475  157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.760352096s)
	I0414 16:32:18.031539  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:18.031568  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:18.031852  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:18.031872  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:18.031881  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:18.031889  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:18.032212  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:18.032226  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:18.032241  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:18.448339  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:18.451605  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:18.453396  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:18.839174  157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.164760866s)
	I0414 16:32:18.839229  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:18.839246  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:18.839543  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:18.839562  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:18.839571  157245 main.go:141] libmachine: Making call to close driver server
	I0414 16:32:18.839578  157245 main.go:141] libmachine: (addons-411768) Calling .Close
	I0414 16:32:18.839578  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:18.839856  157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
	I0414 16:32:18.840836  157245 main.go:141] libmachine: Successfully made call to close driver server
	I0414 16:32:18.840877  157245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 16:32:18.841898  157245 addons.go:479] Verifying addon gcp-auth=true in "addons-411768"
	I0414 16:32:18.843365  157245 out.go:177] * Verifying gcp-auth addon...
	I0414 16:32:18.845436  157245 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0414 16:32:18.864478  157245 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0414 16:32:18.864493  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:18.951850  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:18.952805  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:18.960513  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:19.207080  157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
	I0414 16:32:19.351692  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:19.454048  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:19.454184  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:19.457030  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:19.849673  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:19.952297  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:19.952507  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:19.956382  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:20.354981  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:20.453532  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:20.453589  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:20.454660  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:20.849015  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:20.949921  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:20.952463  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:20.955682  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:21.350320  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:21.448183  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:21.451349  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:21.453773  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:21.703126  157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
	I0414 16:32:21.848988  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:21.947094  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:21.952588  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:21.956537  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:22.349307  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:22.448495  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:22.455383  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:22.456667  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:22.850685  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:22.951804  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:22.953054  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:22.954477  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:23.348393  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:23.831401  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:23.831612  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:23.831616  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:23.833655  157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
	I0414 16:32:23.848344  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:23.947383  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:23.951562  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:23.954147  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:24.348473  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:24.446610  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:24.451762  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:24.453234  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:24.849156  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:24.949875  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:24.953181  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:24.956772  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:25.348813  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:25.446798  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:25.451886  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:25.453640  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:25.848872  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:25.950188  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:25.953238  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:25.955142  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:26.203268  157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
	I0414 16:32:26.349341  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:26.447357  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:26.451683  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:26.453211  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:26.849230  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:27.015308  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:27.015394  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:27.015443  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:27.349136  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:27.449512  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:27.451610  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:27.453332  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:27.848909  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:27.948032  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:27.951917  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:27.953956  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:28.203759  157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
	I0414 16:32:28.349313  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:28.447617  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:28.451870  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:28.453350  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:28.848443  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:28.946954  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:28.952351  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:28.954395  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:29.348655  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:29.446252  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:29.451263  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:29.453170  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:29.848582  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:29.946846  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:29.951988  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:29.953548  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:30.353521  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:30.446738  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:30.452330  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:30.453982  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:30.703364  157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
	I0414 16:32:30.848685  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:30.946699  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:30.951776  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:30.953401  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:31.348906  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:31.449728  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:31.451427  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:31.453114  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:31.849108  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:31.947377  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:31.951326  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:31.953162  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:32.348980  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:32.446915  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:32.452697  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:32.454386  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:32.849366  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:32.948655  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:32.952217  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:32.953750  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:33.538563  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:33.538620  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:33.538621  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:33.538892  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:33.539710  157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
	I0414 16:32:33.848696  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:33.947425  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:33.954704  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:33.955329  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:34.349380  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:34.447237  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:34.451437  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:34.453258  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:34.848112  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:34.946966  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:34.950998  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:34.954352  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:35.348595  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:35.447142  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:35.451238  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:35.454279  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:35.702474  157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
	I0414 16:32:35.848824  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:35.950254  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:35.952491  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:35.954402  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:36.359946  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:36.447210  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:36.451285  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:36.454557  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:36.848742  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:36.948697  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:36.954774  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:36.956451  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:37.348672  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:37.448857  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:37.452195  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:37.454174  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:37.703854  157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
	I0414 16:32:37.848946  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:37.947084  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:37.951369  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:37.953671  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:38.348432  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:38.446857  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:38.455079  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:38.455767  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:38.757296  157245 pod_ready.go:93] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"True"
	I0414 16:32:38.757322  157245 pod_ready.go:82] duration metric: took 28.559850903s for pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace to be "Ready" ...
	I0414 16:32:38.757332  157245 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-4wbtn" in "kube-system" namespace to be "Ready" ...
	I0414 16:32:38.772514  157245 pod_ready.go:93] pod "coredns-668d6bf9bc-4wbtn" in "kube-system" namespace has status "Ready":"True"
	I0414 16:32:38.772538  157245 pod_ready.go:82] duration metric: took 15.200209ms for pod "coredns-668d6bf9bc-4wbtn" in "kube-system" namespace to be "Ready" ...
	I0414 16:32:38.772547  157245 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-qxz94" in "kube-system" namespace to be "Ready" ...
	I0414 16:32:38.776123  157245 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-qxz94" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-qxz94" not found
	I0414 16:32:38.776157  157245 pod_ready.go:82] duration metric: took 3.601331ms for pod "coredns-668d6bf9bc-qxz94" in "kube-system" namespace to be "Ready" ...
	E0414 16:32:38.776170  157245 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-qxz94" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-qxz94" not found
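
[Editor's note] The "(skipping!)" pair above shows pod_ready's tolerance for pods that vanish mid-wait: coredns-668d6bf9bc-qxz94 was removed (the second CoreDNS replica is scaled down during startup), and a not-found error is treated as "skip", not as a failure. A sketch of that branch, assuming the same kubeconfig-path placeholder as earlier:

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	_, err = cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"coredns-668d6bf9bc-qxz94", metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		fmt.Println("pod gone, skipping") // matches the log's "(skipping!)"
	case err != nil:
		panic(err) // any other API error is a real failure
	default:
		fmt.Println("pod exists; check its Ready condition next")
	}
}
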
	I0414 16:32:38.776180  157245 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-411768" in "kube-system" namespace to be "Ready" ...
	I0414 16:32:38.781768  157245 pod_ready.go:93] pod "etcd-addons-411768" in "kube-system" namespace has status "Ready":"True"
	I0414 16:32:38.781787  157245 pod_ready.go:82] duration metric: took 5.596909ms for pod "etcd-addons-411768" in "kube-system" namespace to be "Ready" ...
	I0414 16:32:38.781795  157245 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-411768" in "kube-system" namespace to be "Ready" ...
	I0414 16:32:38.790734  157245 pod_ready.go:93] pod "kube-apiserver-addons-411768" in "kube-system" namespace has status "Ready":"True"
	I0414 16:32:38.790755  157245 pod_ready.go:82] duration metric: took 8.954041ms for pod "kube-apiserver-addons-411768" in "kube-system" namespace to be "Ready" ...
	I0414 16:32:38.790778  157245 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-411768" in "kube-system" namespace to be "Ready" ...
	I0414 16:32:38.849118  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:38.900618  157245 pod_ready.go:93] pod "kube-controller-manager-addons-411768" in "kube-system" namespace has status "Ready":"True"
	I0414 16:32:38.900648  157245 pod_ready.go:82] duration metric: took 109.861531ms for pod "kube-controller-manager-addons-411768" in "kube-system" namespace to be "Ready" ...
	I0414 16:32:38.900661  157245 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bvpxd" in "kube-system" namespace to be "Ready" ...
	I0414 16:32:38.946501  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:38.951752  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:38.953514  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:39.302985  157245 pod_ready.go:93] pod "kube-proxy-bvpxd" in "kube-system" namespace has status "Ready":"True"
	I0414 16:32:39.303009  157245 pod_ready.go:82] duration metric: took 402.341293ms for pod "kube-proxy-bvpxd" in "kube-system" namespace to be "Ready" ...
	I0414 16:32:39.303023  157245 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-411768" in "kube-system" namespace to be "Ready" ...
	I0414 16:32:39.349076  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:39.447049  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:39.451245  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:39.452991  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:39.700762  157245 pod_ready.go:93] pod "kube-scheduler-addons-411768" in "kube-system" namespace has status "Ready":"True"
	I0414 16:32:39.700786  157245 pod_ready.go:82] duration metric: took 397.756511ms for pod "kube-scheduler-addons-411768" in "kube-system" namespace to be "Ready" ...
	I0414 16:32:39.700795  157245 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fqwqf" in "kube-system" namespace to be "Ready" ...
	I0414 16:32:39.848696  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:39.946533  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:39.951892  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:39.953500  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:40.101448  157245 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-fqwqf" in "kube-system" namespace has status "Ready":"True"
	I0414 16:32:40.101473  157245 pod_ready.go:82] duration metric: took 400.672099ms for pod "nvidia-device-plugin-daemonset-fqwqf" in "kube-system" namespace to be "Ready" ...
	I0414 16:32:40.101484  157245 pod_ready.go:39] duration metric: took 29.919990415s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 16:32:40.101509  157245 api_server.go:52] waiting for apiserver process to appear ...
	I0414 16:32:40.101578  157245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 16:32:40.120824  157245 api_server.go:72] duration metric: took 33.042211856s to wait for apiserver process to appear ...
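
[Editor's note] The api_server.go:52/72 step above waits for the kube-apiserver process itself to exist, via the logged pgrep pattern. A stand-alone equivalent (sudo in PATH is assumed; -x matches exactly, -n picks the newest process, -f matches the full command line):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		// pgrep exits non-zero when nothing matches: process not up yet.
		fmt.Println("apiserver process not found yet:", err)
		return
	}
	fmt.Printf("apiserver pid: %s", out)
}
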
	I0414 16:32:40.120853  157245 api_server.go:88] waiting for apiserver healthz status ...
	I0414 16:32:40.120877  157245 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	I0414 16:32:40.125585  157245 api_server.go:279] https://192.168.39.237:8443/healthz returned 200:
	ok
	I0414 16:32:40.126714  157245 api_server.go:141] control plane version: v1.32.2
	I0414 16:32:40.126735  157245 api_server.go:131] duration metric: took 5.873926ms to wait for apiserver health ...
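
[Editor's note] Once the process exists, the healthz probe above is a plain HTTPS GET against https://192.168.39.237:8443/healthz, expecting status 200 with body "ok" (the bare "ok" line in the log is that body). A minimal sketch; certificate verification is skipped here because the apiserver serves a cluster-local CA, whereas minikube itself trusts the real cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.237:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
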
	I0414 16:32:40.126745  157245 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 16:32:40.303844  157245 system_pods.go:59] 18 kube-system pods found
	I0414 16:32:40.303885  157245 system_pods.go:61] "amd-gpu-device-plugin-5sprs" [36ab44cd-e5cd-47dc-97c9-9b9566809a07] Running
	I0414 16:32:40.303893  157245 system_pods.go:61] "coredns-668d6bf9bc-4wbtn" [efde3561-f910-4083-a045-d58c8fdcf7f5] Running
	I0414 16:32:40.303904  157245 system_pods.go:61] "csi-hostpath-attacher-0" [ed55eafd-36ee-4183-9d67-d584935ba068] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0414 16:32:40.303912  157245 system_pods.go:61] "csi-hostpath-resizer-0" [1c5ebede-4ffc-4554-98e8-6b877134818e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0414 16:32:40.303927  157245 system_pods.go:61] "csi-hostpathplugin-mh59q" [b4e6a15d-c481-4c65-8460-c1e3cd4fd26a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0414 16:32:40.303940  157245 system_pods.go:61] "etcd-addons-411768" [982b0174-0ec5-4ad4-915e-d2ce2e6ac0af] Running
	I0414 16:32:40.303946  157245 system_pods.go:61] "kube-apiserver-addons-411768" [78984217-deb9-4f02-8509-1d209433f3bc] Running
	I0414 16:32:40.303951  157245 system_pods.go:61] "kube-controller-manager-addons-411768" [1a00c32b-242d-4a58-988c-8eeadd7b5e47] Running
	I0414 16:32:40.303956  157245 system_pods.go:61] "kube-ingress-dns-minikube" [d52dc595-cef9-487e-9ae1-d5f31774779b] Running
	I0414 16:32:40.303960  157245 system_pods.go:61] "kube-proxy-bvpxd" [240f2e9d-199b-4666-8144-1af7bb751178] Running
	I0414 16:32:40.303964  157245 system_pods.go:61] "kube-scheduler-addons-411768" [5f5f1249-7d39-44d1-9dc3-046dda9255c1] Running
	I0414 16:32:40.303971  157245 system_pods.go:61] "metrics-server-7fbb699795-s4bdh" [fb315cc6-a736-467a-8f3f-7e48a315f789] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 16:32:40.303977  157245 system_pods.go:61] "nvidia-device-plugin-daemonset-fqwqf" [e5a2d34f-7429-47b0-9239-917c6907123c] Running
	I0414 16:32:40.303985  157245 system_pods.go:61] "registry-6c88467877-5vmwg" [e9f17d14-6916-4171-aba7-15b3d6dab565] Running
	I0414 16:32:40.303992  157245 system_pods.go:61] "registry-proxy-bpsmn" [998c1dc5-a7ac-4e6d-a29f-01c054cb33e9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0414 16:32:40.304004  157245 system_pods.go:61] "snapshot-controller-68b874b76f-25g84" [ae23e104-95da-40ae-80b9-0400fb264d20] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0414 16:32:40.304018  157245 system_pods.go:61] "snapshot-controller-68b874b76f-8gfxk" [08f31c66-3f2c-442e-bed1-d74113220a4c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0414 16:32:40.304027  157245 system_pods.go:61] "storage-provisioner" [016d9cef-9f4d-4edc-9108-2b5b76533cc7] Running
	I0414 16:32:40.304044  157245 system_pods.go:74] duration metric: took 177.290903ms to wait for pod list to return data ...
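
[Editor's note] The per-pod status strings above ("Pending / Ready:ContainersNotReady (containers with unready status: [...])") can be derived from the pod's phase plus its container statuses. A sketch that lists kube-system pods and names the containers whose Ready flag is still false, reusing the same assumed kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		var unready []string
		for _, st := range p.Status.ContainerStatuses {
			if !st.Ready {
				unready = append(unready, st.Name)
			}
		}
		if p.Status.Phase == corev1.PodRunning && len(unready) == 0 {
			fmt.Printf("%q Running\n", p.Name)
		} else {
			fmt.Printf("%q %s / unready containers: %v\n", p.Name, p.Status.Phase, unready)
		}
	}
}
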
	I0414 16:32:40.304057  157245 default_sa.go:34] waiting for default service account to be created ...
	I0414 16:32:40.349238  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:40.448144  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:40.451263  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:40.453751  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:40.501406  157245 default_sa.go:45] found service account: "default"
	I0414 16:32:40.501439  157245 default_sa.go:55] duration metric: took 197.36909ms for default service account to be created ...
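
[Editor's note] The default_sa.go wait above exists because the "default" ServiceAccount only appears in a namespace after kube-controller-manager's token controller has run; workloads scheduled before then fail admission. A sketch of polling for it, with the same placeholder kubeconfig path:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = wait.PollUntilContextTimeout(context.Background(), time.Second, time.Minute, true,
		func(ctx context.Context) (bool, error) {
			// Poll until the "default" ServiceAccount exists in the default namespace.
			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			return err == nil, nil
		})
	fmt.Println("default service account wait:", err)
}
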
	I0414 16:32:40.501452  157245 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 16:32:40.704542  157245 system_pods.go:86] 18 kube-system pods found
	I0414 16:32:40.704628  157245 system_pods.go:89] "amd-gpu-device-plugin-5sprs" [36ab44cd-e5cd-47dc-97c9-9b9566809a07] Running
	I0414 16:32:40.704652  157245 system_pods.go:89] "coredns-668d6bf9bc-4wbtn" [efde3561-f910-4083-a045-d58c8fdcf7f5] Running
	I0414 16:32:40.704673  157245 system_pods.go:89] "csi-hostpath-attacher-0" [ed55eafd-36ee-4183-9d67-d584935ba068] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0414 16:32:40.704695  157245 system_pods.go:89] "csi-hostpath-resizer-0" [1c5ebede-4ffc-4554-98e8-6b877134818e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0414 16:32:40.704725  157245 system_pods.go:89] "csi-hostpathplugin-mh59q" [b4e6a15d-c481-4c65-8460-c1e3cd4fd26a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0414 16:32:40.704740  157245 system_pods.go:89] "etcd-addons-411768" [982b0174-0ec5-4ad4-915e-d2ce2e6ac0af] Running
	I0414 16:32:40.704755  157245 system_pods.go:89] "kube-apiserver-addons-411768" [78984217-deb9-4f02-8509-1d209433f3bc] Running
	I0414 16:32:40.704773  157245 system_pods.go:89] "kube-controller-manager-addons-411768" [1a00c32b-242d-4a58-988c-8eeadd7b5e47] Running
	I0414 16:32:40.704789  157245 system_pods.go:89] "kube-ingress-dns-minikube" [d52dc595-cef9-487e-9ae1-d5f31774779b] Running
	I0414 16:32:40.704804  157245 system_pods.go:89] "kube-proxy-bvpxd" [240f2e9d-199b-4666-8144-1af7bb751178] Running
	I0414 16:32:40.704815  157245 system_pods.go:89] "kube-scheduler-addons-411768" [5f5f1249-7d39-44d1-9dc3-046dda9255c1] Running
	I0414 16:32:40.704828  157245 system_pods.go:89] "metrics-server-7fbb699795-s4bdh" [fb315cc6-a736-467a-8f3f-7e48a315f789] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 16:32:40.704834  157245 system_pods.go:89] "nvidia-device-plugin-daemonset-fqwqf" [e5a2d34f-7429-47b0-9239-917c6907123c] Running
	I0414 16:32:40.704841  157245 system_pods.go:89] "registry-6c88467877-5vmwg" [e9f17d14-6916-4171-aba7-15b3d6dab565] Running
	I0414 16:32:40.704854  157245 system_pods.go:89] "registry-proxy-bpsmn" [998c1dc5-a7ac-4e6d-a29f-01c054cb33e9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0414 16:32:40.704878  157245 system_pods.go:89] "snapshot-controller-68b874b76f-25g84" [ae23e104-95da-40ae-80b9-0400fb264d20] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0414 16:32:40.704891  157245 system_pods.go:89] "snapshot-controller-68b874b76f-8gfxk" [08f31c66-3f2c-442e-bed1-d74113220a4c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0414 16:32:40.704900  157245 system_pods.go:89] "storage-provisioner" [016d9cef-9f4d-4edc-9108-2b5b76533cc7] Running
	I0414 16:32:40.704918  157245 system_pods.go:126] duration metric: took 203.457402ms to wait for k8s-apps to be running ...
	I0414 16:32:40.704931  157245 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 16:32:40.704991  157245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 16:32:40.744640  157245 system_svc.go:56] duration metric: took 39.697803ms WaitForService to wait for kubelet
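
[Editor's note] The system_svc.go check above runs "systemctl is-active --quiet ... kubelet" on the node; with --quiet there is no output, and exit status 0 alone means the unit is active. A local equivalent of that check (using the plain unit name; the exact argument form in the log is minikube's own):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; success is conveyed purely by exit status 0.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
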
	I0414 16:32:40.744678  157245 kubeadm.go:582] duration metric: took 33.666067886s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 16:32:40.744702  157245 node_conditions.go:102] verifying NodePressure condition ...
	I0414 16:32:40.848296  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:40.900910  157245 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 16:32:40.900950  157245 node_conditions.go:123] node cpu capacity is 2
	I0414 16:32:40.900969  157245 node_conditions.go:105] duration metric: took 156.260423ms to run NodePressure ...
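
[Editor's note] The node_conditions.go lines above come from reading each node's capacity (ephemeral-storage 17734596Ki, cpu 2) and its pressure conditions. A sketch of that read, again with the assumed kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure should be False on a healthy node.
			fmt.Printf("  %s=%s\n", c.Type, c.Status)
		}
	}
}
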
	I0414 16:32:40.900986  157245 start.go:241] waiting for startup goroutines ...
	I0414 16:32:40.948060  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:40.952210  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:40.954330  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:41.349207  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:41.449925  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:41.451895  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:41.454102  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:41.849540  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:41.946801  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:41.952175  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:41.953995  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:42.348874  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:42.447461  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:42.452190  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:42.453732  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:42.848731  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:42.946994  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:42.956352  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:42.956446  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:43.348408  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:43.446293  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:43.451711  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:43.453538  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:43.849130  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:43.947364  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:43.951553  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:43.953132  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:44.348343  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:44.447463  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:44.451687  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:44.453447  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:44.848569  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:44.947455  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:44.951433  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:44.953252  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:45.348062  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:45.447616  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:45.458880  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:45.459552  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:45.849274  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:45.947491  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:45.951883  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:45.953345  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:46.348451  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:46.447192  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:46.451277  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:46.453137  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:46.848083  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:46.949577  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:46.951882  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:46.953946  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:47.349070  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:47.447242  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:47.453140  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:47.460714  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:47.851286  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:47.973211  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:47.973545  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:47.973649  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:48.349452  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:48.446643  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:48.451811  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:48.453443  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:48.849507  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:48.948358  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:48.952345  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:48.953621  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:49.348680  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:49.446684  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:49.451765  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:49.453436  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:49.848769  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:49.949347  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:49.951365  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:49.955536  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:50.348756  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:50.447095  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:50.451217  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:50.454357  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:50.848812  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:50.946721  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:50.951807  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:50.953312  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:51.348748  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:51.449754  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:51.451424  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:51.455933  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 16:32:51.849941  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:51.954605  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:51.954789  157245 kapi.go:107] duration metric: took 36.003587521s to wait for kubernetes.io/minikube-addons=registry ...
	I0414 16:32:51.955284  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:52.349233  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:52.447475  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:52.451848  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:53.172649  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:53.172676  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:53.172696  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:53.349972  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:53.446815  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:53.452137  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:53.849416  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:53.946973  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:53.951898  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:54.349376  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:54.447701  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:54.452200  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:54.849182  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:54.947048  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:54.951010  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:55.349214  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:55.450278  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:55.451912  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:55.848716  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:56.057257  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:56.057395  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:56.348195  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:56.447145  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:56.451650  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:56.850073  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:56.947838  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:56.952376  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:57.349507  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:57.447095  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:57.451199  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:57.849066  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:57.948492  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:57.952815  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:58.350477  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:58.449774  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:58.452698  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:58.848386  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:58.946694  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:58.952283  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:59.348399  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:59.447527  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:32:59.452414  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:59.847990  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:32:59.986489  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:32:59.987975  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:00.349010  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:00.448767  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:00.452053  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:00.849430  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:00.952275  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:00.954676  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:01.349153  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:01.448527  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:01.456254  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:01.848043  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:01.947603  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:01.953318  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:02.349283  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:02.447325  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:02.453051  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:02.855651  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:02.947573  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:02.952588  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:03.348808  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:03.450311  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:03.460697  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:03.851384  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:03.947183  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:03.951624  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:04.348396  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:04.447643  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:04.451554  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:04.848571  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:04.949789  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:04.951979  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:05.350356  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:05.450935  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:05.453558  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:05.847928  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:05.946542  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:05.951883  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:06.350345  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:06.450847  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:06.453394  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:06.848394  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:06.946485  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:06.951828  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:07.349692  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:07.447385  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:07.452297  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:07.849393  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:07.957936  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:07.958093  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:08.355339  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:08.447917  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:08.453897  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:08.849195  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:08.947667  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:08.952021  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:09.570096  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:09.570104  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:09.570107  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:09.848076  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:09.947066  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:09.951111  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:10.349636  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:10.448136  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:10.452143  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:10.851381  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:10.952538  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:10.953948  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:11.349901  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:11.453477  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:11.453495  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:11.848650  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:11.947300  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:11.952069  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:12.349743  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:12.446712  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:12.453687  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:12.848548  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:12.946674  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:12.952124  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:13.348656  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:13.449230  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:13.451702  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:13.848623  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:13.947173  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:13.951303  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:14.349905  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:14.447739  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:14.453919  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:14.849253  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:14.947796  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:14.952226  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:15.348251  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:15.447244  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:15.451223  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:15.848356  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:15.947247  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:15.951021  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:16.349536  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:16.446833  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 16:33:16.452103  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:16.849099  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:16.947667  157245 kapi.go:107] duration metric: took 59.504288743s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0414 16:33:16.951675  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:17.349541  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:17.452610  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:17.848578  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:17.952559  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:18.349431  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:18.452292  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:18.848861  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:18.951382  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:19.348446  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:19.452745  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:19.848520  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:19.953515  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:20.349217  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:20.452153  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:20.849767  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:20.954626  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:21.349396  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:21.453058  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:21.849393  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:21.952752  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:22.349016  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:22.452003  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:22.848787  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:22.953552  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:23.348180  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:23.452353  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:24.006289  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:24.006395  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:24.348590  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:24.452724  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:24.848755  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:24.952902  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:25.349172  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:25.452537  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:25.848724  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:25.953008  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:26.376878  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:26.626133  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:26.849564  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:26.954952  157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 16:33:27.348973  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:27.451663  157245 kapi.go:107] duration metric: took 1m11.502788648s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0414 16:33:27.848646  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:28.348318  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:28.849726  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:29.348791  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:29.848989  157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 16:33:30.351381  157245 kapi.go:107] duration metric: took 1m11.505944449s to wait for kubernetes.io/minikube-addons=gcp-auth ...
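	
	The kapi.go:96 lines above are minikube polling the cluster until every pod matching an addon's label selector leaves Pending, at roughly a 500ms cadence. A minimal client-go sketch of that same polling pattern, under stated assumptions (default kubeconfig, the gcp-auth selector from the log; this is illustrative, not minikube's actual kapi code):
	
	// waitpods.go - sketch of polling pods by label selector until Running,
	// the pattern behind the kapi.go:96 "waiting for pod" lines above.
	// Assumes a reachable cluster via the default kubeconfig.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func allRunning(pods []corev1.Pod) bool {
		if len(pods) == 0 {
			return false // selector matched nothing yet: still "Pending" from our view
		}
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		selector := "kubernetes.io/minikube-addons=gcp-auth" // label seen in the log above
		deadline := time.Now().Add(3 * time.Minute)
	
		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && allRunning(pods.Items) {
				fmt.Println("all pods matching", selector, "are Running")
				return
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the timestamps above
		}
		fmt.Println("timed out waiting for", selector)
	}
	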
	I0414 16:33:30.352770  157245 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-411768 cluster.
	I0414 16:33:30.353958  157245 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0414 16:33:30.355101  157245 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
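	
	The message above says a pod can opt out of credential mounting by carrying the `gcp-auth-skip-secret` label. A hedged sketch of creating such a pod with client-go; the label key comes from the log message, while the pod name, namespace, image, and the "true" value are placeholder assumptions:
	
	// skipsecret.go - illustrative only: creates a pod labeled so the minikube
	// gcp-auth webhook should skip mounting GCP credentials into it.
	package main
	
	import (
		"context"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // hypothetical name
				// Label key from the log message; value is an assumption.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "app",
					Image:   "gcr.io/k8s-minikube/busybox", // image seen elsewhere in this log
					Command: []string{"sleep", "3600"},
				}},
			},
		}
	
		// The label has to be present at admission time; per the third message
		// above, already-running pods need to be recreated (or addons re-enabled
		// with --refresh) to change their mounts.
		if _, err := client.CoreV1().Pods("default").Create(
			context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
	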
	I0414 16:33:30.356325  157245 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, amd-gpu-device-plugin, storage-provisioner, inspektor-gadget, metrics-server, ingress-dns, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0414 16:33:30.357346  157245 addons.go:514] duration metric: took 1m23.278682879s for enable addons: enabled=[nvidia-device-plugin cloud-spanner amd-gpu-device-plugin storage-provisioner inspektor-gadget metrics-server ingress-dns yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0414 16:33:30.357383  157245 start.go:246] waiting for cluster config update ...
	I0414 16:33:30.357398  157245 start.go:255] writing updated cluster config ...
	I0414 16:33:30.357640  157245 ssh_runner.go:195] Run: rm -f paused
	I0414 16:33:30.409184  157245 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 16:33:30.410660  157245 out.go:177] * Done! kubectl is now configured to use "addons-411768" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.408091865Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744648588408069639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604414,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3662606-6411-4324-bee1-937418bbdbbb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.414932317Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4bee1329-b335-4182-a8cb-803574dcd2f3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.414983326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4bee1329-b335-4182-a8cb-803574dcd2f3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.415306779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:864bc3bd70a07950a38583f2b3082eb40373b9fcbd2af6feb6147d181d66a10c,PodSandboxId:da1a9986402173c5f2d5aeacb8e485646b508681d0808420010e0152c6bd6873,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1744648588234839801,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-7d9564db4-9xf4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c049a27d-8170-46f8-8ed9-29e70b408cdb,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd6e08abce008c9511b4593999124a65069fa49ea9441c21f99e670c293a1068,PodSandboxId:b990b1184119db432e80765ab867512c16cc942ab1222529874f6ad764768338,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744648450683029678,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6fdc475-449b-4a8c-a72c-3d42ef531b1c,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef31858c68b561febaa6af346a0f21d70ca9b2b3765e2305e84d0e21d69fb6d,PodSandboxId:5e0446e945c2909449ffc2b06a012b560037888fdd4e9e4681dd6cd08d9fa4b5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744648413638559611,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2799830-4b53-4013-83
79-64bfa1b342a4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d31f652b18cc075da33290ebdcfe04719706fb046161f256ed9b2ac18362871,PodSandboxId:97bf244602bf97382a9da647ec38bbb0c9e835984d41a077a69dc3423faeaca3,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744648406753083055,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-h2flx,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: 58048470-a007-4f79-9b05-cc4fe6169041,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7999b897d790728b8ef63657c9e4f11780d63187673cd9f0054d4e4aa6b8444f,PodSandboxId:e43e8c82e534431759c14e648906333bd2965fdefb303c24f8176a1402fb2630,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff
8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744648386374515997,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qtplm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 56e4893f-e50c-4e07-aa67-5eac91793235,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab025d3e20c73a414b15ccf9be6abad1ec88729a22aa7861a906948af8397b6a,PodSandboxId:f1fda8ad102985b1525fc3315de4e8205419dd271ca9bd87830894475e3ac0f7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbf
bb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744648383322218841,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lqjfj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8fa21e9-a3b4-4266-9b5b-5bd2b8518b0b,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e305f3d82f4b8539b9c0d7ed4cdf6fac70e369fdd6f0230eee3b9bd5535ab1a2,PodSandboxId:1ae0af87b54a7e3c171ae8ba4c21025bab00e508f1206005eaf9030b091edac2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher
/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1744648367937333586,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-vxlrn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 123bd649-9c06-4a40-8c9e-88219f0ea2e3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b0e851c40380144991f853f0683e4ad9ceb9e34f4bdff113726c0d58980165,PodSandboxId:6e8c72550080eeaeb065fffd901f77cd8d486c92dee0ae5bb8b0ee4fa28ba039,Metadata:&ContainerMetadata{Name:amd-gp
u-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744648357458269550,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5sprs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ab44cd-e5cd-47dc-97c9-9b9566809a07,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e84062959d36d1d67c2b57e182898fa6d4c8c88812627de01b73a6e779bd6be,PodSandboxId:81e1ef9d686de5f615e5a2cadbc014819acaaa42741e55eea96a6a080a6d179b
,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744648355908772322,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d52dc595-cef9-487e-9ae1-d5f31774779b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693f2
719e9989206f59406fd47ff21a2f797f42b3d5eef599c90b28543648564,PodSandboxId:40f2526f24fa491b727aaa3eefce3c3983282312c9d3f31cd6f1afea049852e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744648333887348299,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016d9cef-9f4d-4edc-9108-2b5b76533cc7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e3fb96f0ab62db9
2d5a8f7a2fb9d4ef6e2cffd618c8687aa77a2bcc1d057d7,PodSandboxId:f37590b0584b54ef015afff93bfd7470ebadcaba595881881a89d1152a6edd45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744648331230083492,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-4wbtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efde3561-f910-4083-a045-d58c8fdcf7f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71aaac2f1ac40ebb047bfc5db4fbac4d0313010afb806284b4155772309d8411,PodSandboxId:7aa9c0248892779b42800bf5da99b04ba2e120c0c4007dd635f40855c8dd750b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744648328482796000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvpxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 240f2e9d-199b-4666-8144-1af7bb751178,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd9cf5e8aa14acadbac81dfd28f80ab479fe366057d18bfc45fb36632905fc67,PodSandboxId:e3f03985ff064594539048ff54615a10308a18d629901f25b55272bccfdc6c03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744648317361852695,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 240bc76e760adf0d34e672e8e10bfb1f,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbc1ab888a3c6f4a225e1a7f6b60416c25be2ffe945a16322fb6ee42d1623769,PodSandboxId:3419ef801ad4e6e1e5b0c26d90bccfa60f2a7898a8b8462f9d6cc36f00ee6802,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744648317402210134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8827044f306ba1d367ed9bf7b6d0c8db,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23453a182488dc6f3e93ca6d056ec74f483a6b13d78ea67e71d08d2d45579a20,PodSandboxId:cbe8e675b255438d6953d82b2ce141ac3bf4bfb4eb9b99bf5f2856609a06d960,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744648317379828906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f9dbe492a9a16ef8bdd576105b9300,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.term
inationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2df5cd32b3417b29f619c5242ee5090669d94922a7a53db3179ee7e1332cc3,PodSandboxId:7b1c6bea47c8c09d4f6c05df59ad94fecb1f794e9a27db34685a976d59f93aff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744648317327529934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5658f9d0e92ff7a619dc5f35f6f2df6,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4bee1329-b335-4182-a8cb-803574dcd2f3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.464191576Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d1f1d70-9ed3-4656-b864-fac6729bf900 name=/runtime.v1.RuntimeService/Version
	Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.464260326Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d1f1d70-9ed3-4656-b864-fac6729bf900 name=/runtime.v1.RuntimeService/Version
	Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.465207887Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fadf3eb1-fd77-4bad-aef7-1f7561e7287e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.466627739Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744648588466602013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604414,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fadf3eb1-fd77-4bad-aef7-1f7561e7287e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.467017229Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46816f18-1e01-4e86-87e1-81e3bb7680a7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.467095490Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46816f18-1e01-4e86-87e1-81e3bb7680a7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.467507182Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:864bc3bd70a07950a38583f2b3082eb40373b9fcbd2af6feb6147d181d66a10c,PodSandboxId:da1a9986402173c5f2d5aeacb8e485646b508681d0808420010e0152c6bd6873,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1744648588234839801,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-7d9564db4-9xf4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c049a27d-8170-46f8-8ed9-29e70b408cdb,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd6e08abce008c9511b4593999124a65069fa49ea9441c21f99e670c293a1068,PodSandboxId:b990b1184119db432e80765ab867512c16cc942ab1222529874f6ad764768338,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744648450683029678,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6fdc475-449b-4a8c-a72c-3d42ef531b1c,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef31858c68b561febaa6af346a0f21d70ca9b2b3765e2305e84d0e21d69fb6d,PodSandboxId:5e0446e945c2909449ffc2b06a012b560037888fdd4e9e4681dd6cd08d9fa4b5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744648413638559611,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2799830-4b53-4013-83
79-64bfa1b342a4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d31f652b18cc075da33290ebdcfe04719706fb046161f256ed9b2ac18362871,PodSandboxId:97bf244602bf97382a9da647ec38bbb0c9e835984d41a077a69dc3423faeaca3,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744648406753083055,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-h2flx,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: 58048470-a007-4f79-9b05-cc4fe6169041,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7999b897d790728b8ef63657c9e4f11780d63187673cd9f0054d4e4aa6b8444f,PodSandboxId:e43e8c82e534431759c14e648906333bd2965fdefb303c24f8176a1402fb2630,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff
8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744648386374515997,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qtplm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 56e4893f-e50c-4e07-aa67-5eac91793235,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab025d3e20c73a414b15ccf9be6abad1ec88729a22aa7861a906948af8397b6a,PodSandboxId:f1fda8ad102985b1525fc3315de4e8205419dd271ca9bd87830894475e3ac0f7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbf
bb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744648383322218841,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lqjfj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8fa21e9-a3b4-4266-9b5b-5bd2b8518b0b,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e305f3d82f4b8539b9c0d7ed4cdf6fac70e369fdd6f0230eee3b9bd5535ab1a2,PodSandboxId:1ae0af87b54a7e3c171ae8ba4c21025bab00e508f1206005eaf9030b091edac2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher
/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1744648367937333586,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-vxlrn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 123bd649-9c06-4a40-8c9e-88219f0ea2e3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b0e851c40380144991f853f0683e4ad9ceb9e34f4bdff113726c0d58980165,PodSandboxId:6e8c72550080eeaeb065fffd901f77cd8d486c92dee0ae5bb8b0ee4fa28ba039,Metadata:&ContainerMetadata{Name:amd-gp
u-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744648357458269550,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5sprs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ab44cd-e5cd-47dc-97c9-9b9566809a07,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e84062959d36d1d67c2b57e182898fa6d4c8c88812627de01b73a6e779bd6be,PodSandboxId:81e1ef9d686de5f615e5a2cadbc014819acaaa42741e55eea96a6a080a6d179b
,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744648355908772322,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d52dc595-cef9-487e-9ae1-d5f31774779b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693f2
719e9989206f59406fd47ff21a2f797f42b3d5eef599c90b28543648564,PodSandboxId:40f2526f24fa491b727aaa3eefce3c3983282312c9d3f31cd6f1afea049852e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744648333887348299,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016d9cef-9f4d-4edc-9108-2b5b76533cc7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e3fb96f0ab62db9
2d5a8f7a2fb9d4ef6e2cffd618c8687aa77a2bcc1d057d7,PodSandboxId:f37590b0584b54ef015afff93bfd7470ebadcaba595881881a89d1152a6edd45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744648331230083492,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-4wbtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efde3561-f910-4083-a045-d58c8fdcf7f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71aaac2f1ac40ebb047bfc5db4fbac4d0313010afb806284b4155772309d8411,PodSandboxId:7aa9c0248892779b42800bf5da99b04ba2e120c0c4007dd635f40855c8dd750b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744648328482796000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvpxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 240f2e9d-199b-4666-8144-1af7bb751178,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd9cf5e8aa14acadbac81dfd28f80ab479fe366057d18bfc45fb36632905fc67,PodSandboxId:e3f03985ff064594539048ff54615a10308a18d629901f25b55272bccfdc6c03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744648317361852695,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 240bc76e760adf0d34e672e8e10bfb1f,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbc1ab888a3c6f4a225e1a7f6b60416c25be2ffe945a16322fb6ee42d1623769,PodSandboxId:3419ef801ad4e6e1e5b0c26d90bccfa60f2a7898a8b8462f9d6cc36f00ee6802,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744648317402210134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8827044f306ba1d367ed9bf7b6d0c8db,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23453a182488dc6f3e93ca6d056ec74f483a6b13d78ea67e71d08d2d45579a20,PodSandboxId:cbe8e675b255438d6953d82b2ce141ac3bf4bfb4eb9b99bf5f2856609a06d960,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744648317379828906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f9dbe492a9a16ef8bdd576105b9300,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.term
inationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2df5cd32b3417b29f619c5242ee5090669d94922a7a53db3179ee7e1332cc3,PodSandboxId:7b1c6bea47c8c09d4f6c05df59ad94fecb1f794e9a27db34685a976d59f93aff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744648317327529934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5658f9d0e92ff7a619dc5f35f6f2df6,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46816f18-1e01-4e86-87e1-81e3bb7680a7 name=/runtime.v1.RuntimeService/ListContainers
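The ListContainers dump above is the raw CRI response crio returns to the kubelet's periodic polling; every container it enumerates reappears in the human-readable table under "==> container status <==" below. To reproduce the same listing interactively while debugging (a sketch, assuming CRI-O's default setup inside the minikube VM), crictl can be asked directly:

	out/minikube-linux-amd64 -p addons-411768 ssh "sudo crictl ps -a"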
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	864bc3bd70a07       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   da1a998640217       hello-world-app-7d9564db4-9xf4s
	cd6e08abce008       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                              2 minutes ago            Running             nginx                     0                   b990b1184119d       nginx
	9ef31858c68b5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago            Running             busybox                   0                   5e0446e945c29       busybox
	5d31f652b18cc       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago            Running             controller                0                   97bf244602bf9       ingress-nginx-controller-56d7c84fd4-h2flx
	7999b897d7907       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago            Exited              patch                     0                   e43e8c82e5344       ingress-nginx-admission-patch-qtplm
	ab025d3e20c73       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago            Exited              create                    0                   f1fda8ad10298       ingress-nginx-admission-create-lqjfj
	e305f3d82f4b8       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago            Running             local-path-provisioner    0                   1ae0af87b54a7       local-path-provisioner-76f89f99b5-vxlrn
	72b0e851c4038       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     3 minutes ago            Running             amd-gpu-device-plugin     0                   6e8c72550080e       amd-gpu-device-plugin-5sprs
	0e84062959d36       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             3 minutes ago            Running             minikube-ingress-dns      0                   81e1ef9d686de       kube-ingress-dns-minikube
	693f2719e9989       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago            Running             storage-provisioner       0                   40f2526f24fa4       storage-provisioner
	e4e3fb96f0ab6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago            Running             coredns                   0                   f37590b0584b5       coredns-668d6bf9bc-4wbtn
	71aaac2f1ac40       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                             4 minutes ago            Running             kube-proxy                0                   7aa9c02488927       kube-proxy-bvpxd
	bbc1ab888a3c6       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago            Running             etcd                      0                   3419ef801ad4e       etcd-addons-411768
	23453a182488d       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                             4 minutes ago            Running             kube-scheduler            0                   cbe8e675b2554       kube-scheduler-addons-411768
	bd9cf5e8aa14a       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                             4 minutes ago            Running             kube-controller-manager   0                   e3f03985ff064       kube-controller-manager-addons-411768
	5a2df5cd32b34       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                             4 minutes ago            Running             kube-apiserver            0                   7b1c6bea47c8c       kube-apiserver-addons-411768
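To drill into any single entry from this table, the usual follow-up (an assumed workflow, not something this test run executed) is to fetch that container's logs by ID; crictl accepts the truncated IDs shown in the CONTAINER column, e.g. for the ingress controller:

	out/minikube-linux-amd64 -p addons-411768 ssh "sudo crictl logs 5d31f652b18cc"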
	
	
	==> coredns [e4e3fb96f0ab62db92d5a8f7a2fb9d4ef6e2cffd618c8687aa77a2bcc1d057d7] <==
	[INFO] 10.244.0.8:56648 - 31824 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000434441s
	[INFO] 10.244.0.8:56648 - 52939 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000086021s
	[INFO] 10.244.0.8:56648 - 5146 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000062888s
	[INFO] 10.244.0.8:56648 - 54071 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000055036s
	[INFO] 10.244.0.8:56648 - 19825 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000201205s
	[INFO] 10.244.0.8:56648 - 58145 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00026493s
	[INFO] 10.244.0.8:56648 - 6063 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000127614s
	[INFO] 10.244.0.8:34023 - 63651 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000133459s
	[INFO] 10.244.0.8:34023 - 63359 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000188386s
	[INFO] 10.244.0.8:51734 - 42848 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00014494s
	[INFO] 10.244.0.8:51734 - 42598 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000206876s
	[INFO] 10.244.0.8:42950 - 12729 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090486s
	[INFO] 10.244.0.8:42950 - 12955 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000266683s
	[INFO] 10.244.0.8:48945 - 24007 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000070332s
	[INFO] 10.244.0.8:48945 - 23788 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000051702s
	[INFO] 10.244.0.23:39740 - 57396 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00046847s
	[INFO] 10.244.0.23:58503 - 7959 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000136447s
	[INFO] 10.244.0.23:55707 - 17757 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000271569s
	[INFO] 10.244.0.23:58233 - 12356 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000059819s
	[INFO] 10.244.0.23:33554 - 27500 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114099s
	[INFO] 10.244.0.23:45809 - 47241 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000123692s
	[INFO] 10.244.0.23:47122 - 16421 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.004433435s
	[INFO] 10.244.0.23:46706 - 13978 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005480704s
	[INFO] 10.244.0.26:41163 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000287036s
	[INFO] 10.244.0.26:43704 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000092961s
	
	
	==> describe nodes <==
	Name:               addons-411768
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-411768
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f1e69a1cd498979c80dbe968253c827f6eb2cf37
	                    minikube.k8s.io/name=addons-411768
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T16_32_02_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-411768
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 16:31:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-411768
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 16:36:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 16:34:45 +0000   Mon, 14 Apr 2025 16:31:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 16:34:45 +0000   Mon, 14 Apr 2025 16:31:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 16:34:45 +0000   Mon, 14 Apr 2025 16:31:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 16:34:45 +0000   Mon, 14 Apr 2025 16:32:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.237
	  Hostname:    addons-411768
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 0122c2f2036548bba2ce55793f70c87e
	  System UUID:                0122c2f2-0365-48bb-a2ce-55793f70c87e
	  Boot ID:                    ad8475cb-76ab-45b0-801e-128a5aaf00b5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  default                     hello-world-app-7d9564db4-9xf4s              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-h2flx    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m13s
	  kube-system                 amd-gpu-device-plugin-5sprs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 coredns-668d6bf9bc-4wbtn                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m21s
	  kube-system                 etcd-addons-411768                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m26s
	  kube-system                 kube-apiserver-addons-411768                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-controller-manager-addons-411768        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-proxy-bvpxd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-scheduler-addons-411768                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  local-path-storage          local-path-provisioner-76f89f99b5-vxlrn      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m19s  kube-proxy       
	  Normal  Starting                 4m26s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m26s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m26s  kubelet          Node addons-411768 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s  kubelet          Node addons-411768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s  kubelet          Node addons-411768 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m25s  kubelet          Node addons-411768 status is now: NodeReady
	  Normal  RegisteredNode           4m22s  node-controller  Node addons-411768 event: Registered Node addons-411768 in Controller
	
	
	==> dmesg <==
	[Apr14 16:32] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[  +0.075623] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.282672] systemd-fstab-generator[1354]: Ignoring "noauto" option for root device
	[  +0.120699] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.002509] kauditd_printk_skb: 108 callbacks suppressed
	[  +5.018593] kauditd_printk_skb: 128 callbacks suppressed
	[ +11.897243] kauditd_printk_skb: 95 callbacks suppressed
	[ +15.354401] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.021916] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.347674] kauditd_printk_skb: 2 callbacks suppressed
	[Apr14 16:33] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.806465] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.546054] kauditd_printk_skb: 30 callbacks suppressed
	[  +9.179997] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.198527] kauditd_printk_skb: 16 callbacks suppressed
	[  +8.985718] kauditd_printk_skb: 9 callbacks suppressed
	[ +11.522411] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.272183] kauditd_printk_skb: 6 callbacks suppressed
	[Apr14 16:34] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.577447] kauditd_printk_skb: 30 callbacks suppressed
	[  +6.125514] kauditd_printk_skb: 64 callbacks suppressed
	[  +6.023287] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.271265] kauditd_printk_skb: 27 callbacks suppressed
	[  +7.483009] kauditd_printk_skb: 15 callbacks suppressed
	[Apr14 16:36] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [bbc1ab888a3c6f4a225e1a7f6b60416c25be2ffe945a16322fb6ee42d1623769] <==
	{"level":"info","ts":"2025-04-14T16:33:23.968099Z","caller":"traceutil/trace.go:171","msg":"trace[938940998] linearizableReadLoop","detail":"{readStateIndex:1129; appliedIndex:1128; }","duration":"141.897316ms","start":"2025-04-14T16:33:23.826177Z","end":"2025-04-14T16:33:23.968074Z","steps":["trace[938940998] 'read index received'  (duration: 141.451789ms)","trace[938940998] 'applied index is now lower than readState.Index'  (duration: 444.829µs)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T16:33:23.968636Z","caller":"traceutil/trace.go:171","msg":"trace[1893541846] transaction","detail":"{read_only:false; response_revision:1095; number_of_response:1; }","duration":"189.047518ms","start":"2025-04-14T16:33:23.779569Z","end":"2025-04-14T16:33:23.968616Z","steps":["trace[1893541846] 'process raft request'  (duration: 188.174025ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T16:33:23.968723Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.539694ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T16:33:23.970899Z","caller":"traceutil/trace.go:171","msg":"trace[1145160022] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1095; }","duration":"144.749538ms","start":"2025-04-14T16:33:23.826133Z","end":"2025-04-14T16:33:23.970882Z","steps":["trace[1145160022] 'agreement among raft nodes before linearized reading'  (duration: 142.531609ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T16:33:23.971624Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.107161ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T16:33:23.973052Z","caller":"traceutil/trace.go:171","msg":"trace[1761018362] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1095; }","duration":"112.553145ms","start":"2025-04-14T16:33:23.860486Z","end":"2025-04-14T16:33:23.973039Z","steps":["trace[1761018362] 'agreement among raft nodes before linearized reading'  (duration: 110.901558ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T16:33:26.353516Z","caller":"traceutil/trace.go:171","msg":"trace[431653975] transaction","detail":"{read_only:false; response_revision:1098; number_of_response:1; }","duration":"203.932373ms","start":"2025-04-14T16:33:26.149565Z","end":"2025-04-14T16:33:26.353498Z","steps":["trace[431653975] 'process raft request'  (duration: 203.562072ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T16:33:26.602861Z","caller":"traceutil/trace.go:171","msg":"trace[655271812] linearizableReadLoop","detail":"{readStateIndex:1133; appliedIndex:1132; }","duration":"172.728115ms","start":"2025-04-14T16:33:26.430118Z","end":"2025-04-14T16:33:26.602846Z","steps":["trace[655271812] 'read index received'  (duration: 166.415679ms)","trace[655271812] 'applied index is now lower than readState.Index'  (duration: 6.311694ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T16:33:26.602955Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.835975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T16:33:26.602974Z","caller":"traceutil/trace.go:171","msg":"trace[847519126] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1098; }","duration":"172.889492ms","start":"2025-04-14T16:33:26.430079Z","end":"2025-04-14T16:33:26.602969Z","steps":["trace[847519126] 'agreement among raft nodes before linearized reading'  (duration: 172.83735ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T16:33:57.046717Z","caller":"traceutil/trace.go:171","msg":"trace[1567630474] linearizableReadLoop","detail":"{readStateIndex:1346; appliedIndex:1345; }","duration":"185.807875ms","start":"2025-04-14T16:33:56.860871Z","end":"2025-04-14T16:33:57.046679Z","steps":["trace[1567630474] 'read index received'  (duration: 185.63865ms)","trace[1567630474] 'applied index is now lower than readState.Index'  (duration: 168.572µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T16:33:57.046976Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.065185ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T16:33:57.047088Z","caller":"traceutil/trace.go:171","msg":"trace[301941662] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1302; }","duration":"186.184667ms","start":"2025-04-14T16:33:56.860865Z","end":"2025-04-14T16:33:57.047049Z","steps":["trace[301941662] 'agreement among raft nodes before linearized reading'  (duration: 185.993502ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T16:33:57.047832Z","caller":"traceutil/trace.go:171","msg":"trace[133768102] transaction","detail":"{read_only:false; response_revision:1302; number_of_response:1; }","duration":"294.896315ms","start":"2025-04-14T16:33:56.752921Z","end":"2025-04-14T16:33:57.047817Z","steps":["trace[133768102] 'process raft request'  (duration: 293.628605ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T16:33:57.047230Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.58117ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" limit:1 ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2025-04-14T16:33:57.048652Z","caller":"traceutil/trace.go:171","msg":"trace[1666697132] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1302; }","duration":"134.053193ms","start":"2025-04-14T16:33:56.914585Z","end":"2025-04-14T16:33:57.048638Z","steps":["trace[1666697132] 'agreement among raft nodes before linearized reading'  (duration: 132.513931ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T16:34:26.836379Z","caller":"traceutil/trace.go:171","msg":"trace[2117610398] linearizableReadLoop","detail":"{readStateIndex:1665; appliedIndex:1664; }","duration":"236.383176ms","start":"2025-04-14T16:34:26.599980Z","end":"2025-04-14T16:34:26.836363Z","steps":["trace[2117610398] 'read index received'  (duration: 234.762148ms)","trace[2117610398] 'applied index is now lower than readState.Index'  (duration: 1.62031ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T16:34:26.836598Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.595853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" limit:1 ","response":"range_response_count:1 size:1594"}
	{"level":"info","ts":"2025-04-14T16:34:26.837143Z","caller":"traceutil/trace.go:171","msg":"trace[2012354821] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc-restore; range_end:; response_count:1; response_revision:1605; }","duration":"237.175837ms","start":"2025-04-14T16:34:26.599959Z","end":"2025-04-14T16:34:26.837135Z","steps":["trace[2012354821] 'agreement among raft nodes before linearized reading'  (duration: 236.536431ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T16:34:26.836829Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.712358ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-04-14T16:34:26.837283Z","caller":"traceutil/trace.go:171","msg":"trace[1401235009] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1605; }","duration":"203.184965ms","start":"2025-04-14T16:34:26.634091Z","end":"2025-04-14T16:34:26.837276Z","steps":["trace[1401235009] 'agreement among raft nodes before linearized reading'  (duration: 202.692748ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T16:34:26.837014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.401219ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-04-14T16:34:26.837475Z","caller":"traceutil/trace.go:171","msg":"trace[753201394] range","detail":"{range_begin:/registry/jobs/; range_end:/registry/jobs0; response_count:0; response_revision:1605; }","duration":"105.882396ms","start":"2025-04-14T16:34:26.731584Z","end":"2025-04-14T16:34:26.837467Z","steps":["trace[753201394] 'agreement among raft nodes before linearized reading'  (duration: 105.408369ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T16:34:26.837042Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.150068ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T16:34:26.838299Z","caller":"traceutil/trace.go:171","msg":"trace[202009545] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1605; }","duration":"160.427821ms","start":"2025-04-14T16:34:26.677863Z","end":"2025-04-14T16:34:26.838291Z","steps":["trace[202009545] 'agreement among raft nodes before linearized reading'  (duration: 159.161943ms)"],"step_count":1}
	
	
	==> kernel <==
	 16:36:28 up 5 min,  0 users,  load average: 1.87, 1.59, 0.76
	Linux addons-411768 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5a2df5cd32b3417b29f619c5242ee5090669d94922a7a53db3179ee7e1332cc3] <==
	E0414 16:32:48.752106       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.147.230:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.147.230:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.147.230:443: connect: connection refused" logger="UnhandledError"
	I0414 16:32:48.813462       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0414 16:33:41.188330       1 conn.go:339] Error on socket receive: read tcp 192.168.39.237:8443->192.168.39.1:34348: use of closed network connection
	E0414 16:33:41.371979       1 conn.go:339] Error on socket receive: read tcp 192.168.39.237:8443->192.168.39.1:34368: use of closed network connection
	I0414 16:33:50.579236       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.88.251"}
	I0414 16:34:01.527282       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0414 16:34:02.561160       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0414 16:34:07.310110       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0414 16:34:07.587278       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.200.21"}
	I0414 16:34:16.090874       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0414 16:34:33.459998       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 16:34:33.460268       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 16:34:33.552764       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 16:34:33.552837       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 16:34:33.562107       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 16:34:33.562200       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 16:34:33.578159       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 16:34:33.578217       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 16:34:33.616937       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 16:34:33.616982       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0414 16:34:34.578296       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0414 16:34:34.621320       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0414 16:34:34.628135       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0414 16:34:49.757540       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0414 16:36:27.151139       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.173.71"}
	
	
	==> kube-controller-manager [bd9cf5e8aa14acadbac81dfd28f80ab479fe366057d18bfc45fb36632905fc67] <==
	E0414 16:35:18.690132       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 16:35:44.444009       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 16:35:44.445012       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0414 16:35:44.445910       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 16:35:44.445973       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 16:35:44.609952       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 16:35:44.610773       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0414 16:35:44.611703       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 16:35:44.611746       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 16:35:59.312792       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 16:35:59.313936       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0414 16:35:59.314866       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 16:35:59.314904       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 16:36:00.060140       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 16:36:00.061129       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0414 16:36:00.061966       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 16:36:00.062009       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0414 16:36:26.955059       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="28.076569ms"
	I0414 16:36:26.973778       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="18.625163ms"
	I0414 16:36:26.973899       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="26.378µs"
	I0414 16:36:26.980820       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="71.272µs"
	W0414 16:36:27.438947       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 16:36:27.439839       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0414 16:36:27.441524       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 16:36:27.441557       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [71aaac2f1ac40ebb047bfc5db4fbac4d0313010afb806284b4155772309d8411] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 16:32:09.679906       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 16:32:09.691323       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.237"]
	E0414 16:32:09.691377       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 16:32:09.773916       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 16:32:09.773949       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 16:32:09.773971       1 server_linux.go:170] "Using iptables Proxier"
	I0414 16:32:09.776292       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 16:32:09.776548       1 server.go:497] "Version info" version="v1.32.2"
	I0414 16:32:09.776560       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 16:32:09.780192       1 config.go:199] "Starting service config controller"
	I0414 16:32:09.780216       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 16:32:09.780239       1 config.go:105] "Starting endpoint slice config controller"
	I0414 16:32:09.780243       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 16:32:09.780772       1 config.go:329] "Starting node config controller"
	I0414 16:32:09.780779       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 16:32:09.880529       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 16:32:09.880572       1 shared_informer.go:320] Caches are synced for service config
	I0414 16:32:09.880829       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [23453a182488dc6f3e93ca6d056ec74f483a6b13d78ea67e71d08d2d45579a20] <==
	W0414 16:31:59.776014       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 16:31:59.776024       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 16:31:59.776068       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0414 16:31:59.776097       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 16:31:59.776131       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0414 16:31:59.776142       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 16:32:00.586860       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0414 16:32:00.586922       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 16:32:00.708461       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 16:32:00.708510       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 16:32:00.746525       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0414 16:32:00.746576       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 16:32:00.825552       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 16:32:00.826210       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 16:32:00.857572       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0414 16:32:00.857618       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 16:32:00.860214       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0414 16:32:00.860272       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 16:32:00.883621       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0414 16:32:00.883666       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 16:32:00.893741       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 16:32:00.893784       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 16:32:00.916557       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0414 16:32:00.916606       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0414 16:32:01.370664       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 16:36:02 addons-411768 kubelet[1238]: E0414 16:36:02.124851    1238 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 16:36:02 addons-411768 kubelet[1238]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 16:36:02 addons-411768 kubelet[1238]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 16:36:02 addons-411768 kubelet[1238]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 16:36:02 addons-411768 kubelet[1238]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 16:36:02 addons-411768 kubelet[1238]: E0414 16:36:02.545083    1238 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744648562544657654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 16:36:02 addons-411768 kubelet[1238]: E0414 16:36:02.545107    1238 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744648562544657654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 16:36:12 addons-411768 kubelet[1238]: E0414 16:36:12.547068    1238 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744648572546809660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 16:36:12 addons-411768 kubelet[1238]: E0414 16:36:12.547108    1238 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744648572546809660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 16:36:22 addons-411768 kubelet[1238]: E0414 16:36:22.550160    1238 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744648582549852290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 16:36:22 addons-411768 kubelet[1238]: E0414 16:36:22.550210    1238 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744648582549852290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.952842    1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="b4e6a15d-c481-4c65-8460-c1e3cd4fd26a" containerName="node-driver-registrar"
	Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953276    1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="08f31c66-3f2c-442e-bed1-d74113220a4c" containerName="volume-snapshot-controller"
	Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953362    1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="ae23e104-95da-40ae-80b9-0400fb264d20" containerName="volume-snapshot-controller"
	Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953394    1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="b4e6a15d-c481-4c65-8460-c1e3cd4fd26a" containerName="hostpath"
	Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953527    1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="b4e6a15d-c481-4c65-8460-c1e3cd4fd26a" containerName="liveness-probe"
	Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953561    1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="b4e6a15d-c481-4c65-8460-c1e3cd4fd26a" containerName="csi-provisioner"
	Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953645    1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="1585fae8-d827-4996-8f81-6d06a66b84ee" containerName="cloud-spanner-emulator"
	Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953682    1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="b4e6a15d-c481-4c65-8460-c1e3cd4fd26a" containerName="csi-snapshotter"
	Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953765    1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="1c5ebede-4ffc-4554-98e8-6b877134818e" containerName="csi-resizer"
	Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953800    1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="e5a2d34f-7429-47b0-9239-917c6907123c" containerName="nvidia-device-plugin-ctr"
	Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953888    1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="b4e6a15d-c481-4c65-8460-c1e3cd4fd26a" containerName="csi-external-health-monitor-controller"
	Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953920    1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="779860dd-6f16-40ee-a078-aa1f4dd024cb" containerName="task-pv-container"
	Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.954004    1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="ed55eafd-36ee-4183-9d67-d584935ba068" containerName="csi-attacher"
	Apr 14 16:36:27 addons-411768 kubelet[1238]: I0414 16:36:27.010201    1238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h92n\" (UniqueName: \"kubernetes.io/projected/c049a27d-8170-46f8-8ed9-29e70b408cdb-kube-api-access-4h92n\") pod \"hello-world-app-7d9564db4-9xf4s\" (UID: \"c049a27d-8170-46f8-8ed9-29e70b408cdb\") " pod="default/hello-world-app-7d9564db4-9xf4s"
	
	
	==> storage-provisioner [693f2719e9989206f59406fd47ff21a2f797f42b3d5eef599c90b28543648564] <==
	I0414 16:32:14.574029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0414 16:32:14.623936       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0414 16:32:14.623991       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0414 16:32:14.644293       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0414 16:32:14.645063       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-411768_e0494113-686a-45d6-952b-08fb9770703b!
	I0414 16:32:14.646931       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6dd6cea6-55be-4fcc-9f09-dedf5a4b05e4", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-411768_e0494113-686a-45d6-952b-08fb9770703b became leader
	I0414 16:32:14.746638       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-411768_e0494113-686a-45d6-952b-08fb9770703b!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-411768 -n addons-411768
helpers_test.go:261: (dbg) Run:  kubectl --context addons-411768 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-lqjfj ingress-nginx-admission-patch-qtplm
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-411768 describe pod ingress-nginx-admission-create-lqjfj ingress-nginx-admission-patch-qtplm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-411768 describe pod ingress-nginx-admission-create-lqjfj ingress-nginx-admission-patch-qtplm: exit status 1 (53.641652ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-lqjfj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-qtplm" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-411768 describe pod ingress-nginx-admission-create-lqjfj ingress-nginx-admission-patch-qtplm: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-411768 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-411768 addons disable ingress-dns --alsologtostderr -v=1: (1.057950232s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-411768 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-411768 addons disable ingress --alsologtostderr -v=1: (7.674564648s)
--- FAIL: TestAddons/parallel/Ingress (151.35s)
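A note on the NotFound errors in the post-mortem above: ingress-nginx-admission-create-lqjfj and ingress-nginx-admission-patch-qtplm are pods owned by one-shot Jobs (their create/patch containers appear as Exited in the container listing), and such Jobs are typically cleaned up shortly after completion, so the describe lookup finding nothing is expected cleanup rather than an additional failure. A minimal triage sketch for a live cluster, assuming the same addons-411768 context (these commands are illustrative and not part of the test):

	kubectl --context addons-411768 get jobs -n ingress-nginx
	kubectl --context addons-411768 get pods -n ingress-nginx --show-labels
	kubectl --context addons-411768 get events -n ingress-nginx --sort-by=.lastTimestamp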

TestPreload (162.42s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-120543 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-120543 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m29.31547047s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-120543 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-120543 image pull gcr.io/k8s-minikube/busybox: (2.438902213s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-120543
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-120543: (7.284466011s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-120543 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-120543 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m0.585220981s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-120543 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
panic.go:631: *** TestPreload FAILED at 2025-04-14 17:26:58.712787376 +0000 UTC m=+3354.391115894
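For local reproduction, the failing sequence reduces to the commands below, a sketch assembled from the preload_test.go steps logged above (the profile name test-preload-120543 is simply the one this run generated, and out/minikube-linux-amd64 is the tree-local build the suite exercises):

	# build the VM without a preload tarball, then pull an extra image
	out/minikube-linux-amd64 start -p test-preload-120543 --memory=2200 --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-120543 image pull gcr.io/k8s-minikube/busybox
	# restart; the pulled image is expected to survive the stop/start cycle
	out/minikube-linux-amd64 stop -p test-preload-120543
	out/minikube-linux-amd64 start -p test-preload-120543 --memory=2200 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-120543 image list | grep busybox    # preload_test.go:76 expects a match

The image list in the stdout block above contains only the preloaded v1.24.4 components (under both registry.k8s.io names and their k8s.gcr.io aliases) plus storage-provisioner and kindnetd, with no busybox entry, which is exactly the assertion that fails.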
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-120543 -n test-preload-120543
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-120543 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-120543 logs -n 25: (1.001025411s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-326457 ssh -n                                                                 | multinode-326457     | jenkins | v1.35.0 | 14 Apr 25 17:12 UTC | 14 Apr 25 17:12 UTC |
	|         | multinode-326457-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-326457 ssh -n multinode-326457 sudo cat                                       | multinode-326457     | jenkins | v1.35.0 | 14 Apr 25 17:12 UTC | 14 Apr 25 17:12 UTC |
	|         | /home/docker/cp-test_multinode-326457-m03_multinode-326457.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-326457 cp multinode-326457-m03:/home/docker/cp-test.txt                       | multinode-326457     | jenkins | v1.35.0 | 14 Apr 25 17:12 UTC | 14 Apr 25 17:12 UTC |
	|         | multinode-326457-m02:/home/docker/cp-test_multinode-326457-m03_multinode-326457-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-326457 ssh -n                                                                 | multinode-326457     | jenkins | v1.35.0 | 14 Apr 25 17:12 UTC | 14 Apr 25 17:12 UTC |
	|         | multinode-326457-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-326457 ssh -n multinode-326457-m02 sudo cat                                   | multinode-326457     | jenkins | v1.35.0 | 14 Apr 25 17:12 UTC | 14 Apr 25 17:12 UTC |
	|         | /home/docker/cp-test_multinode-326457-m03_multinode-326457-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-326457 node stop m03                                                          | multinode-326457     | jenkins | v1.35.0 | 14 Apr 25 17:12 UTC | 14 Apr 25 17:12 UTC |
	| node    | multinode-326457 node start                                                             | multinode-326457     | jenkins | v1.35.0 | 14 Apr 25 17:12 UTC | 14 Apr 25 17:12 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-326457                                                                | multinode-326457     | jenkins | v1.35.0 | 14 Apr 25 17:12 UTC |                     |
	| stop    | -p multinode-326457                                                                     | multinode-326457     | jenkins | v1.35.0 | 14 Apr 25 17:12 UTC | 14 Apr 25 17:15 UTC |
	| start   | -p multinode-326457                                                                     | multinode-326457     | jenkins | v1.35.0 | 14 Apr 25 17:15 UTC | 14 Apr 25 17:18 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-326457                                                                | multinode-326457     | jenkins | v1.35.0 | 14 Apr 25 17:18 UTC |                     |
	| node    | multinode-326457 node delete                                                            | multinode-326457     | jenkins | v1.35.0 | 14 Apr 25 17:18 UTC | 14 Apr 25 17:18 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-326457 stop                                                                   | multinode-326457     | jenkins | v1.35.0 | 14 Apr 25 17:18 UTC | 14 Apr 25 17:21 UTC |
	| start   | -p multinode-326457                                                                     | multinode-326457     | jenkins | v1.35.0 | 14 Apr 25 17:21 UTC | 14 Apr 25 17:23 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-326457                                                                | multinode-326457     | jenkins | v1.35.0 | 14 Apr 25 17:23 UTC |                     |
	| start   | -p multinode-326457-m02                                                                 | multinode-326457-m02 | jenkins | v1.35.0 | 14 Apr 25 17:23 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-326457-m03                                                                 | multinode-326457-m03 | jenkins | v1.35.0 | 14 Apr 25 17:23 UTC | 14 Apr 25 17:24 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-326457                                                                 | multinode-326457     | jenkins | v1.35.0 | 14 Apr 25 17:24 UTC |                     |
	| delete  | -p multinode-326457-m03                                                                 | multinode-326457-m03 | jenkins | v1.35.0 | 14 Apr 25 17:24 UTC | 14 Apr 25 17:24 UTC |
	| delete  | -p multinode-326457                                                                     | multinode-326457     | jenkins | v1.35.0 | 14 Apr 25 17:24 UTC | 14 Apr 25 17:24 UTC |
	| start   | -p test-preload-120543                                                                  | test-preload-120543  | jenkins | v1.35.0 | 14 Apr 25 17:24 UTC | 14 Apr 25 17:25 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-120543 image pull                                                          | test-preload-120543  | jenkins | v1.35.0 | 14 Apr 25 17:25 UTC | 14 Apr 25 17:25 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-120543                                                                  | test-preload-120543  | jenkins | v1.35.0 | 14 Apr 25 17:25 UTC | 14 Apr 25 17:25 UTC |
	| start   | -p test-preload-120543                                                                  | test-preload-120543  | jenkins | v1.35.0 | 14 Apr 25 17:25 UTC | 14 Apr 25 17:26 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-120543 image list                                                          | test-preload-120543  | jenkins | v1.35.0 | 14 Apr 25 17:26 UTC | 14 Apr 25 17:26 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 17:25:57
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 17:25:57.962060  187666 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:25:57.962304  187666 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:25:57.962315  187666 out.go:358] Setting ErrFile to fd 2...
	I0414 17:25:57.962319  187666 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:25:57.962518  187666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	I0414 17:25:57.963020  187666 out.go:352] Setting JSON to false
	I0414 17:25:57.963865  187666 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7656,"bootTime":1744643902,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 17:25:57.963916  187666 start.go:139] virtualization: kvm guest
	I0414 17:25:57.966103  187666 out.go:177] * [test-preload-120543] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 17:25:57.967719  187666 out.go:177]   - MINIKUBE_LOCATION=20349
	I0414 17:25:57.967739  187666 notify.go:220] Checking for updates...
	I0414 17:25:57.970496  187666 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 17:25:57.971996  187666 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:25:57.973496  187666 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 17:25:57.974933  187666 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 17:25:57.976480  187666 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 17:25:57.978470  187666 config.go:182] Loaded profile config "test-preload-120543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0414 17:25:57.978887  187666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:25:57.978938  187666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:25:57.993654  187666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40751
	I0414 17:25:57.994146  187666 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:25:57.994647  187666 main.go:141] libmachine: Using API Version  1
	I0414 17:25:57.994671  187666 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:25:57.995127  187666 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:25:57.995352  187666 main.go:141] libmachine: (test-preload-120543) Calling .DriverName
	I0414 17:25:57.997435  187666 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0414 17:25:57.998920  187666 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 17:25:57.999204  187666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:25:57.999236  187666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:25:58.013054  187666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45489
	I0414 17:25:58.013388  187666 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:25:58.013766  187666 main.go:141] libmachine: Using API Version  1
	I0414 17:25:58.013787  187666 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:25:58.014129  187666 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:25:58.014303  187666 main.go:141] libmachine: (test-preload-120543) Calling .DriverName
	I0414 17:25:58.046771  187666 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 17:25:58.048000  187666 start.go:297] selected driver: kvm2
	I0414 17:25:58.048010  187666 start.go:901] validating driver "kvm2" against &{Name:test-preload-120543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-120543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:25:58.048097  187666 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 17:25:58.048681  187666 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:25:58.048740  187666 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20349-149500/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 17:25:58.062194  187666 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 17:25:58.062505  187666 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 17:25:58.062533  187666 cni.go:84] Creating CNI manager for ""
	I0414 17:25:58.062571  187666 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:25:58.062615  187666 start.go:340] cluster config:
	{Name:test-preload-120543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-120543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:25:58.062705  187666 iso.go:125] acquiring lock: {Name:mk56ab209abfa01de10f2f82564ecd03de00499a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:25:58.065298  187666 out.go:177] * Starting "test-preload-120543" primary control-plane node in "test-preload-120543" cluster
	I0414 17:25:58.066567  187666 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0414 17:25:58.088880  187666 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0414 17:25:58.088903  187666 cache.go:56] Caching tarball of preloaded images
	I0414 17:25:58.088996  187666 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0414 17:25:58.090639  187666 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0414 17:25:58.092124  187666 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0414 17:25:58.115151  187666 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0414 17:26:01.575895  187666 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0414 17:26:01.576001  187666 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0414 17:26:02.417223  187666 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0414 17:26:02.417348  187666 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/test-preload-120543/config.json ...
	I0414 17:26:02.417618  187666 start.go:360] acquireMachinesLock for test-preload-120543: {Name:mk6f64d523f60ec1e047c10a4c586315976dcd43 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 17:26:02.417699  187666 start.go:364] duration metric: took 57.362µs to acquireMachinesLock for "test-preload-120543"
	I0414 17:26:02.417724  187666 start.go:96] Skipping create...Using existing machine configuration
	I0414 17:26:02.417736  187666 fix.go:54] fixHost starting: 
	I0414 17:26:02.418065  187666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:26:02.418112  187666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:26:02.432522  187666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33851
	I0414 17:26:02.433000  187666 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:26:02.433442  187666 main.go:141] libmachine: Using API Version  1
	I0414 17:26:02.433464  187666 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:26:02.433741  187666 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:26:02.433953  187666 main.go:141] libmachine: (test-preload-120543) Calling .DriverName
	I0414 17:26:02.434064  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetState
	I0414 17:26:02.435767  187666 fix.go:112] recreateIfNeeded on test-preload-120543: state=Stopped err=<nil>
	I0414 17:26:02.435784  187666 main.go:141] libmachine: (test-preload-120543) Calling .DriverName
	W0414 17:26:02.435916  187666 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 17:26:02.438949  187666 out.go:177] * Restarting existing kvm2 VM for "test-preload-120543" ...
	I0414 17:26:02.440366  187666 main.go:141] libmachine: (test-preload-120543) Calling .Start
	I0414 17:26:02.440519  187666 main.go:141] libmachine: (test-preload-120543) starting domain...
	I0414 17:26:02.440540  187666 main.go:141] libmachine: (test-preload-120543) ensuring networks are active...
	I0414 17:26:02.441256  187666 main.go:141] libmachine: (test-preload-120543) Ensuring network default is active
	I0414 17:26:02.441585  187666 main.go:141] libmachine: (test-preload-120543) Ensuring network mk-test-preload-120543 is active
	I0414 17:26:02.441974  187666 main.go:141] libmachine: (test-preload-120543) getting domain XML...
	I0414 17:26:02.442701  187666 main.go:141] libmachine: (test-preload-120543) creating domain...
	I0414 17:26:03.625063  187666 main.go:141] libmachine: (test-preload-120543) waiting for IP...
	I0414 17:26:03.625951  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:03.626333  187666 main.go:141] libmachine: (test-preload-120543) DBG | unable to find current IP address of domain test-preload-120543 in network mk-test-preload-120543
	I0414 17:26:03.626414  187666 main.go:141] libmachine: (test-preload-120543) DBG | I0414 17:26:03.626348  187717 retry.go:31] will retry after 224.724082ms: waiting for domain to come up
	I0414 17:26:03.852831  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:03.853312  187666 main.go:141] libmachine: (test-preload-120543) DBG | unable to find current IP address of domain test-preload-120543 in network mk-test-preload-120543
	I0414 17:26:03.853340  187666 main.go:141] libmachine: (test-preload-120543) DBG | I0414 17:26:03.853249  187717 retry.go:31] will retry after 243.122946ms: waiting for domain to come up
	I0414 17:26:04.097860  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:04.098421  187666 main.go:141] libmachine: (test-preload-120543) DBG | unable to find current IP address of domain test-preload-120543 in network mk-test-preload-120543
	I0414 17:26:04.098449  187666 main.go:141] libmachine: (test-preload-120543) DBG | I0414 17:26:04.098384  187717 retry.go:31] will retry after 310.863151ms: waiting for domain to come up
	I0414 17:26:04.410903  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:04.411418  187666 main.go:141] libmachine: (test-preload-120543) DBG | unable to find current IP address of domain test-preload-120543 in network mk-test-preload-120543
	I0414 17:26:04.411440  187666 main.go:141] libmachine: (test-preload-120543) DBG | I0414 17:26:04.411349  187717 retry.go:31] will retry after 503.129822ms: waiting for domain to come up
	I0414 17:26:04.916003  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:04.916392  187666 main.go:141] libmachine: (test-preload-120543) DBG | unable to find current IP address of domain test-preload-120543 in network mk-test-preload-120543
	I0414 17:26:04.916416  187666 main.go:141] libmachine: (test-preload-120543) DBG | I0414 17:26:04.916372  187717 retry.go:31] will retry after 473.051121ms: waiting for domain to come up
	I0414 17:26:05.391234  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:05.391691  187666 main.go:141] libmachine: (test-preload-120543) DBG | unable to find current IP address of domain test-preload-120543 in network mk-test-preload-120543
	I0414 17:26:05.391723  187666 main.go:141] libmachine: (test-preload-120543) DBG | I0414 17:26:05.391646  187717 retry.go:31] will retry after 771.395572ms: waiting for domain to come up
	I0414 17:26:06.164504  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:06.164918  187666 main.go:141] libmachine: (test-preload-120543) DBG | unable to find current IP address of domain test-preload-120543 in network mk-test-preload-120543
	I0414 17:26:06.164950  187666 main.go:141] libmachine: (test-preload-120543) DBG | I0414 17:26:06.164858  187717 retry.go:31] will retry after 974.709058ms: waiting for domain to come up
	I0414 17:26:07.140867  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:07.141273  187666 main.go:141] libmachine: (test-preload-120543) DBG | unable to find current IP address of domain test-preload-120543 in network mk-test-preload-120543
	I0414 17:26:07.141296  187666 main.go:141] libmachine: (test-preload-120543) DBG | I0414 17:26:07.141272  187717 retry.go:31] will retry after 1.442689981s: waiting for domain to come up
	I0414 17:26:08.585093  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:08.585518  187666 main.go:141] libmachine: (test-preload-120543) DBG | unable to find current IP address of domain test-preload-120543 in network mk-test-preload-120543
	I0414 17:26:08.585545  187666 main.go:141] libmachine: (test-preload-120543) DBG | I0414 17:26:08.585487  187717 retry.go:31] will retry after 1.388413412s: waiting for domain to come up
	I0414 17:26:09.975976  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:09.976524  187666 main.go:141] libmachine: (test-preload-120543) DBG | unable to find current IP address of domain test-preload-120543 in network mk-test-preload-120543
	I0414 17:26:09.976556  187666 main.go:141] libmachine: (test-preload-120543) DBG | I0414 17:26:09.976465  187717 retry.go:31] will retry after 2.15545304s: waiting for domain to come up
	I0414 17:26:12.133740  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:12.134223  187666 main.go:141] libmachine: (test-preload-120543) DBG | unable to find current IP address of domain test-preload-120543 in network mk-test-preload-120543
	I0414 17:26:12.134253  187666 main.go:141] libmachine: (test-preload-120543) DBG | I0414 17:26:12.134178  187717 retry.go:31] will retry after 1.824386068s: waiting for domain to come up
	I0414 17:26:13.960034  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:13.960458  187666 main.go:141] libmachine: (test-preload-120543) DBG | unable to find current IP address of domain test-preload-120543 in network mk-test-preload-120543
	I0414 17:26:13.960483  187666 main.go:141] libmachine: (test-preload-120543) DBG | I0414 17:26:13.960432  187717 retry.go:31] will retry after 3.374771123s: waiting for domain to come up
	I0414 17:26:17.338925  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:17.339298  187666 main.go:141] libmachine: (test-preload-120543) DBG | unable to find current IP address of domain test-preload-120543 in network mk-test-preload-120543
	I0414 17:26:17.339326  187666 main.go:141] libmachine: (test-preload-120543) DBG | I0414 17:26:17.339262  187717 retry.go:31] will retry after 4.100835093s: waiting for domain to come up
	I0414 17:26:21.442080  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:21.442544  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has current primary IP address 192.168.39.53 and MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:21.442562  187666 main.go:141] libmachine: (test-preload-120543) found domain IP: 192.168.39.53
	I0414 17:26:21.442570  187666 main.go:141] libmachine: (test-preload-120543) reserving static IP address...
	I0414 17:26:21.443022  187666 main.go:141] libmachine: (test-preload-120543) DBG | found host DHCP lease matching {name: "test-preload-120543", mac: "52:54:00:98:bb:0c", ip: "192.168.39.53"} in network mk-test-preload-120543: {Iface:virbr1 ExpiryTime:2025-04-14 18:26:13 +0000 UTC Type:0 Mac:52:54:00:98:bb:0c Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-120543 Clientid:01:52:54:00:98:bb:0c}
	I0414 17:26:21.443051  187666 main.go:141] libmachine: (test-preload-120543) DBG | skip adding static IP to network mk-test-preload-120543 - found existing host DHCP lease matching {name: "test-preload-120543", mac: "52:54:00:98:bb:0c", ip: "192.168.39.53"}
	I0414 17:26:21.443065  187666 main.go:141] libmachine: (test-preload-120543) reserved static IP address 192.168.39.53 for domain test-preload-120543
	I0414 17:26:21.443075  187666 main.go:141] libmachine: (test-preload-120543) waiting for SSH...
	I0414 17:26:21.443083  187666 main.go:141] libmachine: (test-preload-120543) DBG | Getting to WaitForSSH function...
	I0414 17:26:21.444980  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:21.445262  187666 main.go:141] libmachine: (test-preload-120543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bb:0c", ip: ""} in network mk-test-preload-120543: {Iface:virbr1 ExpiryTime:2025-04-14 18:26:13 +0000 UTC Type:0 Mac:52:54:00:98:bb:0c Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-120543 Clientid:01:52:54:00:98:bb:0c}
	I0414 17:26:21.445290  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined IP address 192.168.39.53 and MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:21.445371  187666 main.go:141] libmachine: (test-preload-120543) DBG | Using SSH client type: external
	I0414 17:26:21.445399  187666 main.go:141] libmachine: (test-preload-120543) DBG | Using SSH private key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/test-preload-120543/id_rsa (-rw-------)
	I0414 17:26:21.445437  187666 main.go:141] libmachine: (test-preload-120543) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20349-149500/.minikube/machines/test-preload-120543/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 17:26:21.445447  187666 main.go:141] libmachine: (test-preload-120543) DBG | About to run SSH command:
	I0414 17:26:21.445453  187666 main.go:141] libmachine: (test-preload-120543) DBG | exit 0
	I0414 17:26:21.565259  187666 main.go:141] libmachine: (test-preload-120543) DBG | SSH cmd err, output: <nil>: 
	I0414 17:26:21.565623  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetConfigRaw
	I0414 17:26:21.566283  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetIP
	I0414 17:26:21.568552  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:21.568845  187666 main.go:141] libmachine: (test-preload-120543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bb:0c", ip: ""} in network mk-test-preload-120543: {Iface:virbr1 ExpiryTime:2025-04-14 18:26:13 +0000 UTC Type:0 Mac:52:54:00:98:bb:0c Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-120543 Clientid:01:52:54:00:98:bb:0c}
	I0414 17:26:21.568875  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined IP address 192.168.39.53 and MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:21.569104  187666 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/test-preload-120543/config.json ...
	I0414 17:26:21.569319  187666 machine.go:93] provisionDockerMachine start ...
	I0414 17:26:21.569338  187666 main.go:141] libmachine: (test-preload-120543) Calling .DriverName
	I0414 17:26:21.569545  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHHostname
	I0414 17:26:21.571411  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:21.571673  187666 main.go:141] libmachine: (test-preload-120543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bb:0c", ip: ""} in network mk-test-preload-120543: {Iface:virbr1 ExpiryTime:2025-04-14 18:26:13 +0000 UTC Type:0 Mac:52:54:00:98:bb:0c Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-120543 Clientid:01:52:54:00:98:bb:0c}
	I0414 17:26:21.571696  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined IP address 192.168.39.53 and MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:21.571814  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHPort
	I0414 17:26:21.571981  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHKeyPath
	I0414 17:26:21.572104  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHKeyPath
	I0414 17:26:21.572235  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHUsername
	I0414 17:26:21.572370  187666 main.go:141] libmachine: Using SSH client type: native
	I0414 17:26:21.572632  187666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0414 17:26:21.572644  187666 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 17:26:21.673624  187666 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 17:26:21.673661  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetMachineName
	I0414 17:26:21.673882  187666 buildroot.go:166] provisioning hostname "test-preload-120543"
	I0414 17:26:21.673911  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetMachineName
	I0414 17:26:21.674079  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHHostname
	I0414 17:26:21.676400  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:21.676647  187666 main.go:141] libmachine: (test-preload-120543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bb:0c", ip: ""} in network mk-test-preload-120543: {Iface:virbr1 ExpiryTime:2025-04-14 18:26:13 +0000 UTC Type:0 Mac:52:54:00:98:bb:0c Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-120543 Clientid:01:52:54:00:98:bb:0c}
	I0414 17:26:21.676670  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined IP address 192.168.39.53 and MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:21.676878  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHPort
	I0414 17:26:21.677046  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHKeyPath
	I0414 17:26:21.677205  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHKeyPath
	I0414 17:26:21.677341  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHUsername
	I0414 17:26:21.677482  187666 main.go:141] libmachine: Using SSH client type: native
	I0414 17:26:21.677682  187666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0414 17:26:21.677698  187666 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-120543 && echo "test-preload-120543" | sudo tee /etc/hostname
	I0414 17:26:21.791601  187666 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-120543
	
	I0414 17:26:21.791638  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHHostname
	I0414 17:26:21.794184  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:21.794526  187666 main.go:141] libmachine: (test-preload-120543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bb:0c", ip: ""} in network mk-test-preload-120543: {Iface:virbr1 ExpiryTime:2025-04-14 18:26:13 +0000 UTC Type:0 Mac:52:54:00:98:bb:0c Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-120543 Clientid:01:52:54:00:98:bb:0c}
	I0414 17:26:21.794550  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined IP address 192.168.39.53 and MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:21.794749  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHPort
	I0414 17:26:21.794933  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHKeyPath
	I0414 17:26:21.795099  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHKeyPath
	I0414 17:26:21.795250  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHUsername
	I0414 17:26:21.795401  187666 main.go:141] libmachine: Using SSH client type: native
	I0414 17:26:21.795719  187666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0414 17:26:21.795745  187666 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-120543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-120543/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-120543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 17:26:21.903965  187666 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 17:26:21.903992  187666 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20349-149500/.minikube CaCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20349-149500/.minikube}
	I0414 17:26:21.904009  187666 buildroot.go:174] setting up certificates
	I0414 17:26:21.904018  187666 provision.go:84] configureAuth start
	I0414 17:26:21.904026  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetMachineName
	I0414 17:26:21.904261  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetIP
	I0414 17:26:21.906796  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:21.907165  187666 main.go:141] libmachine: (test-preload-120543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bb:0c", ip: ""} in network mk-test-preload-120543: {Iface:virbr1 ExpiryTime:2025-04-14 18:26:13 +0000 UTC Type:0 Mac:52:54:00:98:bb:0c Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-120543 Clientid:01:52:54:00:98:bb:0c}
	I0414 17:26:21.907207  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined IP address 192.168.39.53 and MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:21.907334  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHHostname
	I0414 17:26:21.909185  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:21.909470  187666 main.go:141] libmachine: (test-preload-120543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bb:0c", ip: ""} in network mk-test-preload-120543: {Iface:virbr1 ExpiryTime:2025-04-14 18:26:13 +0000 UTC Type:0 Mac:52:54:00:98:bb:0c Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-120543 Clientid:01:52:54:00:98:bb:0c}
	I0414 17:26:21.909496  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined IP address 192.168.39.53 and MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:21.909582  187666 provision.go:143] copyHostCerts
	I0414 17:26:21.909637  187666 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem, removing ...
	I0414 17:26:21.909660  187666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem
	I0414 17:26:21.909736  187666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem (1082 bytes)
	I0414 17:26:21.909866  187666 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem, removing ...
	I0414 17:26:21.909877  187666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem
	I0414 17:26:21.909921  187666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem (1123 bytes)
	I0414 17:26:21.910000  187666 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem, removing ...
	I0414 17:26:21.910012  187666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem
	I0414 17:26:21.910048  187666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem (1675 bytes)
	I0414 17:26:21.910161  187666 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem org=jenkins.test-preload-120543 san=[127.0.0.1 192.168.39.53 localhost minikube test-preload-120543]
	I0414 17:26:22.000897  187666 provision.go:177] copyRemoteCerts
	I0414 17:26:22.000950  187666 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 17:26:22.000972  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHHostname
	I0414 17:26:22.003386  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:22.003679  187666 main.go:141] libmachine: (test-preload-120543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bb:0c", ip: ""} in network mk-test-preload-120543: {Iface:virbr1 ExpiryTime:2025-04-14 18:26:13 +0000 UTC Type:0 Mac:52:54:00:98:bb:0c Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-120543 Clientid:01:52:54:00:98:bb:0c}
	I0414 17:26:22.003707  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined IP address 192.168.39.53 and MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:22.003841  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHPort
	I0414 17:26:22.004014  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHKeyPath
	I0414 17:26:22.004169  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHUsername
	I0414 17:26:22.004308  187666 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/test-preload-120543/id_rsa Username:docker}
	I0414 17:26:22.083607  187666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 17:26:22.106768  187666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0414 17:26:22.128850  187666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 17:26:22.150759  187666 provision.go:87] duration metric: took 246.730426ms to configureAuth
	I0414 17:26:22.150786  187666 buildroot.go:189] setting minikube options for container-runtime
	I0414 17:26:22.150946  187666 config.go:182] Loaded profile config "test-preload-120543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0414 17:26:22.151017  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHHostname
	I0414 17:26:22.153596  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:22.153884  187666 main.go:141] libmachine: (test-preload-120543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bb:0c", ip: ""} in network mk-test-preload-120543: {Iface:virbr1 ExpiryTime:2025-04-14 18:26:13 +0000 UTC Type:0 Mac:52:54:00:98:bb:0c Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-120543 Clientid:01:52:54:00:98:bb:0c}
	I0414 17:26:22.153913  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined IP address 192.168.39.53 and MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:22.154071  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHPort
	I0414 17:26:22.154247  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHKeyPath
	I0414 17:26:22.154393  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHKeyPath
	I0414 17:26:22.154529  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHUsername
	I0414 17:26:22.154704  187666 main.go:141] libmachine: Using SSH client type: native
	I0414 17:26:22.154889  187666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0414 17:26:22.154901  187666 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 17:26:22.368663  187666 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 17:26:22.368749  187666 machine.go:96] duration metric: took 799.412642ms to provisionDockerMachine
	I0414 17:26:22.368768  187666 start.go:293] postStartSetup for "test-preload-120543" (driver="kvm2")
	I0414 17:26:22.368784  187666 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 17:26:22.368817  187666 main.go:141] libmachine: (test-preload-120543) Calling .DriverName
	I0414 17:26:22.369148  187666 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 17:26:22.369184  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHHostname
	I0414 17:26:22.371641  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:22.371886  187666 main.go:141] libmachine: (test-preload-120543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bb:0c", ip: ""} in network mk-test-preload-120543: {Iface:virbr1 ExpiryTime:2025-04-14 18:26:13 +0000 UTC Type:0 Mac:52:54:00:98:bb:0c Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-120543 Clientid:01:52:54:00:98:bb:0c}
	I0414 17:26:22.371916  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined IP address 192.168.39.53 and MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:22.372105  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHPort
	I0414 17:26:22.372277  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHKeyPath
	I0414 17:26:22.372409  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHUsername
	I0414 17:26:22.372531  187666 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/test-preload-120543/id_rsa Username:docker}
	I0414 17:26:22.451621  187666 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 17:26:22.455669  187666 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 17:26:22.455688  187666 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/addons for local assets ...
	I0414 17:26:22.455747  187666 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/files for local assets ...
	I0414 17:26:22.455819  187666 filesync.go:149] local asset: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem -> 1566332.pem in /etc/ssl/certs
	I0414 17:26:22.455899  187666 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 17:26:22.464579  187666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:26:22.487134  187666 start.go:296] duration metric: took 118.352583ms for postStartSetup
	I0414 17:26:22.487170  187666 fix.go:56] duration metric: took 20.069434264s for fixHost
	I0414 17:26:22.487192  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHHostname
	I0414 17:26:22.489843  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:22.490171  187666 main.go:141] libmachine: (test-preload-120543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bb:0c", ip: ""} in network mk-test-preload-120543: {Iface:virbr1 ExpiryTime:2025-04-14 18:26:13 +0000 UTC Type:0 Mac:52:54:00:98:bb:0c Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-120543 Clientid:01:52:54:00:98:bb:0c}
	I0414 17:26:22.490195  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined IP address 192.168.39.53 and MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:22.490367  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHPort
	I0414 17:26:22.490549  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHKeyPath
	I0414 17:26:22.490704  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHKeyPath
	I0414 17:26:22.490817  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHUsername
	I0414 17:26:22.490991  187666 main.go:141] libmachine: Using SSH client type: native
	I0414 17:26:22.491220  187666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0414 17:26:22.491232  187666 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 17:26:22.590163  187666 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744651582.550862359
	
	I0414 17:26:22.590183  187666 fix.go:216] guest clock: 1744651582.550862359
	I0414 17:26:22.590190  187666 fix.go:229] Guest: 2025-04-14 17:26:22.550862359 +0000 UTC Remote: 2025-04-14 17:26:22.487174585 +0000 UTC m=+24.559894726 (delta=63.687774ms)
	I0414 17:26:22.590227  187666 fix.go:200] guest clock delta is within tolerance: 63.687774ms
	I0414 17:26:22.590231  187666 start.go:83] releasing machines lock for "test-preload-120543", held for 20.172518882s
	I0414 17:26:22.590252  187666 main.go:141] libmachine: (test-preload-120543) Calling .DriverName
	I0414 17:26:22.590520  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetIP
	I0414 17:26:22.593010  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:22.593393  187666 main.go:141] libmachine: (test-preload-120543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bb:0c", ip: ""} in network mk-test-preload-120543: {Iface:virbr1 ExpiryTime:2025-04-14 18:26:13 +0000 UTC Type:0 Mac:52:54:00:98:bb:0c Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-120543 Clientid:01:52:54:00:98:bb:0c}
	I0414 17:26:22.593423  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined IP address 192.168.39.53 and MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:22.593517  187666 main.go:141] libmachine: (test-preload-120543) Calling .DriverName
	I0414 17:26:22.593953  187666 main.go:141] libmachine: (test-preload-120543) Calling .DriverName
	I0414 17:26:22.594118  187666 main.go:141] libmachine: (test-preload-120543) Calling .DriverName
	I0414 17:26:22.594200  187666 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 17:26:22.594242  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHHostname
	I0414 17:26:22.594354  187666 ssh_runner.go:195] Run: cat /version.json
	I0414 17:26:22.594379  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHHostname
	I0414 17:26:22.596307  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:22.596634  187666 main.go:141] libmachine: (test-preload-120543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bb:0c", ip: ""} in network mk-test-preload-120543: {Iface:virbr1 ExpiryTime:2025-04-14 18:26:13 +0000 UTC Type:0 Mac:52:54:00:98:bb:0c Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-120543 Clientid:01:52:54:00:98:bb:0c}
	I0414 17:26:22.596660  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined IP address 192.168.39.53 and MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:22.596783  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHPort
	I0414 17:26:22.598815  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:22.599088  187666 main.go:141] libmachine: (test-preload-120543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bb:0c", ip: ""} in network mk-test-preload-120543: {Iface:virbr1 ExpiryTime:2025-04-14 18:26:13 +0000 UTC Type:0 Mac:52:54:00:98:bb:0c Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-120543 Clientid:01:52:54:00:98:bb:0c}
	I0414 17:26:22.599124  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined IP address 192.168.39.53 and MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:22.599223  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHPort
	I0414 17:26:22.599286  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHKeyPath
	I0414 17:26:22.599359  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHKeyPath
	I0414 17:26:22.599403  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHUsername
	I0414 17:26:22.599476  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHUsername
	I0414 17:26:22.599494  187666 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/test-preload-120543/id_rsa Username:docker}
	I0414 17:26:22.599581  187666 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/test-preload-120543/id_rsa Username:docker}
	I0414 17:26:22.684170  187666 ssh_runner.go:195] Run: systemctl --version
	I0414 17:26:22.704615  187666 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 17:26:22.852514  187666 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 17:26:22.859391  187666 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 17:26:22.859458  187666 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 17:26:22.874791  187666 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
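Note the pattern in the find command above: matching bridge and podman CNI configs are renamed with a .mk_disabled suffix rather than deleted, so the original 87-podman-bridge.conflist stays recoverable if the runtime choice changes later; the loopback config is excluded from the disable pass, and the earlier stat shows none exists on this image anyway.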
	I0414 17:26:22.874810  187666 start.go:495] detecting cgroup driver to use...
	I0414 17:26:22.874862  187666 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 17:26:22.890707  187666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 17:26:22.904246  187666 docker.go:217] disabling cri-docker service (if available) ...
	I0414 17:26:22.904285  187666 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 17:26:22.917262  187666 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 17:26:22.930180  187666 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 17:26:23.046276  187666 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 17:26:23.187881  187666 docker.go:233] disabling docker service ...
	I0414 17:26:23.187960  187666 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 17:26:23.202410  187666 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 17:26:23.214765  187666 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 17:26:23.354266  187666 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 17:26:23.468324  187666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 17:26:23.482255  187666 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 17:26:23.499702  187666 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0414 17:26:23.499760  187666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:26:23.516385  187666 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 17:26:23.516444  187666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:26:23.526688  187666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:26:23.536664  187666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:26:23.546728  187666 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 17:26:23.557068  187666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:26:23.567148  187666 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:26:23.583176  187666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
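Two files come out of the commands above. The printf/tee pair writes /etc/crictl.yaml with a single setting that points every later crictl call at CRI-O's socket:

	runtime-endpoint: unix:///var/run/crio/crio.sock

and the sed edits should leave /etc/crio/crio.conf.d/02-crio.conf containing roughly this fragment (reconstructed from the commands, not read back from the VM):

	pause_image = "registry.k8s.io/pause:3.7"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The sysctl entry lets container processes bind ports below 1024 without extra privileges.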
	I0414 17:26:23.593189  187666 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 17:26:23.602401  187666 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 17:26:23.602482  187666 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 17:26:23.616006  187666 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
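The failed sysctl probe just above is expected on a fresh boot: /proc/sys/net/bridge only appears once the br_netfilter module is loaded, so minikube falls back to modprobe and then enables net.ipv4.ip_forward, which the bridge CNI needs so the kernel will route pod traffic.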
	I0414 17:26:23.625083  187666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:26:23.737059  187666 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 17:26:23.831941  187666 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 17:26:23.832005  187666 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 17:26:23.836457  187666 start.go:563] Will wait 60s for crictl version
	I0414 17:26:23.836492  187666 ssh_runner.go:195] Run: which crictl
	I0414 17:26:23.839992  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 17:26:23.882191  187666 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 17:26:23.882277  187666 ssh_runner.go:195] Run: crio --version
	I0414 17:26:23.908831  187666 ssh_runner.go:195] Run: crio --version
	I0414 17:26:23.936835  187666 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0414 17:26:23.937948  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetIP
	I0414 17:26:23.940472  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:23.940823  187666 main.go:141] libmachine: (test-preload-120543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bb:0c", ip: ""} in network mk-test-preload-120543: {Iface:virbr1 ExpiryTime:2025-04-14 18:26:13 +0000 UTC Type:0 Mac:52:54:00:98:bb:0c Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-120543 Clientid:01:52:54:00:98:bb:0c}
	I0414 17:26:23.940852  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined IP address 192.168.39.53 and MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:23.941054  187666 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 17:26:23.945116  187666 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
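The /etc/hosts rewrite above follows a safe pattern: grep -v filters the old host.minikube.internal line into a temp file, echo appends the fresh mapping, and cp (not mv) copies the result back so /etc/hosts keeps its original inode and permissions. The same idempotent merge, sketched in Go (hypothetical helper, not minikube's code):

	package main

	import (
		"fmt"
		"strings"
	)

	// upsertHostsLine drops any existing entry for name and appends "ip<TAB>name",
	// mirroring the grep -v / echo pipeline in the log above.
	func upsertHostsLine(hosts []string, ip, name string) []string {
		out := hosts[:0] // in-place filter; writes never pass the read index
		for _, line := range hosts {
			if !strings.HasSuffix(line, "\t"+name) {
				out = append(out, line)
			}
		}
		return append(out, ip+"\t"+name)
	}

	func main() {
		hosts := []string{"127.0.0.1\tlocalhost", "192.168.39.1\thost.minikube.internal"}
		fmt.Println(upsertHostsLine(hosts, "192.168.39.1", "host.minikube.internal"))
	}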
	I0414 17:26:23.957939  187666 kubeadm.go:883] updating cluster {Name:test-preload-120543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-120543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 17:26:23.958033  187666 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0414 17:26:23.958072  187666 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:26:23.992827  187666 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0414 17:26:23.992952  187666 ssh_runner.go:195] Run: which lz4
	I0414 17:26:23.997015  187666 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 17:26:24.001279  187666 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 17:26:24.001301  187666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0414 17:26:25.475950  187666 crio.go:462] duration metric: took 1.478961798s to copy over tarball
	I0414 17:26:25.476030  187666 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 17:26:27.803155  187666 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.327097271s)
	I0414 17:26:27.803178  187666 crio.go:469] duration metric: took 2.327201948s to extract the tarball
	I0414 17:26:27.803186  187666 ssh_runner.go:146] rm: /preloaded.tar.lz4
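The preload restore above runs in four steps: stat to see whether /preloaded.tar.lz4 is already on the VM (it is not, hence status 1), scp of the 459 MB cached tarball, extraction into /var with xattrs preserved so security.capability attributes on image layers survive, then removal of the tarball. Condensed into Go against a hypothetical runner interface (a sketch, not minikube's ssh_runner API):

	package preload

	// Runner abstracts "execute on the VM" and "copy to the VM" (assumed interface).
	type Runner interface {
		Run(cmd string) error
		Copy(src, dst string) error
	}

	// restorePreload mirrors the stat / scp / tar / rm sequence in the log.
	func restorePreload(r Runner, localTarball string) error {
		const remote = "/preloaded.tar.lz4"
		if err := r.Run(`stat -c "%s %y" ` + remote); err != nil {
			// Not on the VM yet: transfer the cached tarball.
			if err := r.Copy(localTarball, remote); err != nil {
				return err
			}
		}
		// --xattrs keeps capability attributes on the extracted image layers.
		if err := r.Run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + remote); err != nil {
			return err
		}
		return r.Run("sudo rm -f " + remote)
	}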
	I0414 17:26:27.844015  187666 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:26:27.885285  187666 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0414 17:26:27.885308  187666 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 17:26:27.885370  187666 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0414 17:26:27.885429  187666 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 17:26:27.885455  187666 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0414 17:26:27.885473  187666 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0414 17:26:27.885428  187666 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0414 17:26:27.885520  187666 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0414 17:26:27.885531  187666 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0414 17:26:27.885370  187666 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:26:27.886845  187666 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0414 17:26:27.886860  187666 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0414 17:26:27.886873  187666 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 17:26:27.886875  187666 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0414 17:26:27.886876  187666 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:26:27.886853  187666 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0414 17:26:27.886912  187666 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0414 17:26:27.886848  187666 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
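The "daemon lookup ... No such image" lines are expected noise on this host: before falling back to its on-disk cache, minikube evidently asks the local Docker daemon for each image, and the CI machine holds none of them, so every lookup fails and the flow drops through to the podman inspect / cache-load path below.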
	I0414 17:26:28.019836  187666 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0414 17:26:28.026529  187666 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0414 17:26:28.031504  187666 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0414 17:26:28.036873  187666 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0414 17:26:28.040234  187666 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 17:26:28.044349  187666 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0414 17:26:28.062898  187666 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0414 17:26:28.138292  187666 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0414 17:26:28.138329  187666 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0414 17:26:28.138369  187666 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0414 17:26:28.138408  187666 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0414 17:26:28.138377  187666 ssh_runner.go:195] Run: which crictl
	I0414 17:26:28.138449  187666 ssh_runner.go:195] Run: which crictl
	I0414 17:26:28.170945  187666 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0414 17:26:28.170996  187666 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0414 17:26:28.171051  187666 ssh_runner.go:195] Run: which crictl
	I0414 17:26:28.199743  187666 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0414 17:26:28.199775  187666 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0414 17:26:28.199792  187666 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0414 17:26:28.199807  187666 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 17:26:28.199840  187666 ssh_runner.go:195] Run: which crictl
	I0414 17:26:28.199849  187666 ssh_runner.go:195] Run: which crictl
	I0414 17:26:28.208527  187666 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0414 17:26:28.208553  187666 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0414 17:26:28.208588  187666 ssh_runner.go:195] Run: which crictl
	I0414 17:26:28.218966  187666 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0414 17:26:28.218998  187666 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0414 17:26:28.219035  187666 ssh_runner.go:195] Run: which crictl
	I0414 17:26:28.219042  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0414 17:26:28.219092  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0414 17:26:28.219115  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0414 17:26:28.219170  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 17:26:28.219175  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0414 17:26:28.219190  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0414 17:26:28.346147  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0414 17:26:28.346225  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0414 17:26:28.346256  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0414 17:26:28.346317  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 17:26:28.346415  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0414 17:26:28.346447  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0414 17:26:28.346470  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0414 17:26:28.494214  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0414 17:26:28.494322  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 17:26:28.494385  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0414 17:26:28.494428  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0414 17:26:28.494536  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0414 17:26:28.494538  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0414 17:26:28.494580  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0414 17:26:28.620849  187666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0414 17:26:28.633941  187666 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0414 17:26:28.634030  187666 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0414 17:26:28.648704  187666 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0414 17:26:28.648805  187666 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0414 17:26:28.648820  187666 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0414 17:26:28.648855  187666 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0414 17:26:28.648903  187666 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0414 17:26:28.648917  187666 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0414 17:26:28.648926  187666 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0414 17:26:28.648961  187666 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0414 17:26:28.648964  187666 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0414 17:26:28.649067  187666 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0414 17:26:28.684708  187666 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0414 17:26:28.684727  187666 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0414 17:26:28.684771  187666 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0414 17:26:28.684781  187666 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0414 17:26:28.684810  187666 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0414 17:26:28.684839  187666 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0414 17:26:28.684896  187666 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0414 17:26:28.684913  187666 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0414 17:26:28.684954  187666 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0414 17:26:28.684980  187666 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0414 17:26:28.793474  187666 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:26:31.541575  187666 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.856576186s)
	I0414 17:26:31.541622  187666 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0414 17:26:31.541641  187666 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.856849183s)
	I0414 17:26:31.541660  187666 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0414 17:26:31.541667  187666 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.748168943s)
	I0414 17:26:31.541693  187666 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0414 17:26:31.541759  187666 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0414 17:26:31.686557  187666 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0414 17:26:31.686596  187666 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0414 17:26:31.686654  187666 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0414 17:26:32.127284  187666 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0414 17:26:32.127342  187666 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0414 17:26:32.127399  187666 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0414 17:26:34.271734  187666 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.144305935s)
	I0414 17:26:34.271773  187666 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0414 17:26:34.271805  187666 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0414 17:26:34.271858  187666 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0414 17:26:35.021843  187666 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0414 17:26:35.021899  187666 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0414 17:26:35.021964  187666 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0414 17:26:35.380952  187666 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0414 17:26:35.381006  187666 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0414 17:26:35.381059  187666 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0414 17:26:36.224431  187666 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0414 17:26:36.224475  187666 cache_images.go:123] Successfully loaded all cached images
	I0414 17:26:36.224481  187666 cache_images.go:92] duration metric: took 8.339162042s to LoadCachedImages
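Each image in the block above goes through the same cycle: podman image inspect to read the current ID, a "needs transfer" verdict when the expected hash is absent, crictl rmi to clear the stale tag, a stat that skips the scp because /var/lib/minikube/images/ already holds the tarball, and finally a serialized podman load. Roughly, reusing the hypothetical Runner interface from the preload sketch earlier (the real code compares the inspected ID against an expected hash; this collapses that to a presence check):

	// loadCachedImage condenses the per-image flow above (hypothetical sketch).
	func loadCachedImage(r Runner, ref, cacheTar, vmTar string) error {
		if err := r.Run("sudo podman image inspect --format {{.Id}} " + ref); err == nil {
			return nil // already present at the expected ID
		}
		_ = r.Run("sudo /usr/bin/crictl rmi " + ref) // clear any stale copy first
		if err := r.Run(`stat -c "%s %y" ` + vmTar); err != nil {
			if err := r.Copy(cacheTar, vmTar); err != nil { // else "copy: skipping ... (exists)"
				return err
			}
		}
		return r.Run("sudo podman load -i " + vmTar)
	}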
	I0414 17:26:36.224492  187666 kubeadm.go:934] updating node { 192.168.39.53 8443 v1.24.4 crio true true} ...
	I0414 17:26:36.224598  187666 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-120543 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-120543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
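In the rendered kubelet unit above, the empty ExecStart= line is the standard systemd drop-in idiom: it clears any ExecStart inherited from the base kubelet.service before the next line installs the test-specific command (note --container-runtime-endpoint on the CRI-O socket and --node-ip matching the DHCP lease). This content appears to be what is scp'd below as the 378-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.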
	I0414 17:26:36.224681  187666 ssh_runner.go:195] Run: crio config
	I0414 17:26:36.272936  187666 cni.go:84] Creating CNI manager for ""
	I0414 17:26:36.272956  187666 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:26:36.272966  187666 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 17:26:36.272982  187666 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.53 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-120543 NodeName:test-preload-120543 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 17:26:36.273119  187666 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-120543"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
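
Four documents share the kubeadm.yaml above: InitConfiguration (node registration and the 8443 bind address), ClusterConfiguration (the control-plane.minikube.internal endpoint, admission plugins, and pod/service CIDRs), KubeletConfiguration (cgroupfs to match the CRI-O edit earlier, with eviction thresholds zeroed to disable disk-pressure management, per the inline comment), and KubeProxyConfiguration, whose 0 and 0s conntrack values tell kube-proxy to leave the kernel's nf_conntrack defaults untouched, as the "Skip setting" comments indicate.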
	
	I0414 17:26:36.273178  187666 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0414 17:26:36.282812  187666 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 17:26:36.282909  187666 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 17:26:36.292225  187666 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0414 17:26:36.313499  187666 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 17:26:36.329379  187666 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0414 17:26:36.345484  187666 ssh_runner.go:195] Run: grep 192.168.39.53	control-plane.minikube.internal$ /etc/hosts
	I0414 17:26:36.349182  187666 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 17:26:36.361107  187666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:26:36.470713  187666 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:26:36.487374  187666 certs.go:68] Setting up /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/test-preload-120543 for IP: 192.168.39.53
	I0414 17:26:36.487393  187666 certs.go:194] generating shared ca certs ...
	I0414 17:26:36.487408  187666 certs.go:226] acquiring lock for ca certs: {Name:mk65518f71a0fe967168d84423f624d889cf0622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:26:36.487583  187666 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key
	I0414 17:26:36.487629  187666 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key
	I0414 17:26:36.487640  187666 certs.go:256] generating profile certs ...
	I0414 17:26:36.487710  187666 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/test-preload-120543/client.key
	I0414 17:26:36.487767  187666 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/test-preload-120543/apiserver.key.e17b6875
	I0414 17:26:36.487800  187666 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/test-preload-120543/proxy-client.key
	I0414 17:26:36.487911  187666 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem (1338 bytes)
	W0414 17:26:36.487942  187666 certs.go:480] ignoring /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633_empty.pem, impossibly tiny 0 bytes
	I0414 17:26:36.487952  187666 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 17:26:36.487978  187666 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem (1082 bytes)
	I0414 17:26:36.488007  187666 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem (1123 bytes)
	I0414 17:26:36.488033  187666 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem (1675 bytes)
	I0414 17:26:36.488068  187666 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:26:36.488659  187666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 17:26:36.521234  187666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 17:26:36.558886  187666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 17:26:36.587389  187666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 17:26:36.615077  187666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/test-preload-120543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0414 17:26:36.650536  187666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/test-preload-120543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 17:26:36.687433  187666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/test-preload-120543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 17:26:36.710064  187666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/test-preload-120543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 17:26:36.732720  187666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem --> /usr/share/ca-certificates/156633.pem (1338 bytes)
	I0414 17:26:36.754829  187666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /usr/share/ca-certificates/1566332.pem (1708 bytes)
	I0414 17:26:36.776818  187666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 17:26:36.799150  187666 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 17:26:36.814879  187666 ssh_runner.go:195] Run: openssl version
	I0414 17:26:36.820468  187666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 17:26:36.830896  187666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:26:36.835094  187666 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 16:31 /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:26:36.835132  187666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:26:36.840807  187666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 17:26:36.851177  187666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156633.pem && ln -fs /usr/share/ca-certificates/156633.pem /etc/ssl/certs/156633.pem"
	I0414 17:26:36.861383  187666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156633.pem
	I0414 17:26:36.865633  187666 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 16:39 /usr/share/ca-certificates/156633.pem
	I0414 17:26:36.865668  187666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156633.pem
	I0414 17:26:36.871333  187666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/156633.pem /etc/ssl/certs/51391683.0"
	I0414 17:26:36.881568  187666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1566332.pem && ln -fs /usr/share/ca-certificates/1566332.pem /etc/ssl/certs/1566332.pem"
	I0414 17:26:36.891918  187666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1566332.pem
	I0414 17:26:36.896191  187666 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 16:39 /usr/share/ca-certificates/1566332.pem
	I0414 17:26:36.896236  187666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1566332.pem
	I0414 17:26:36.901838  187666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1566332.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 17:26:36.912198  187666 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 17:26:36.916424  187666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 17:26:36.922042  187666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 17:26:36.927532  187666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 17:26:36.933235  187666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 17:26:36.938747  187666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 17:26:36.944249  187666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
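The openssl sequence above is the certificate health gate: x509 -hash prints the subject-name hash that the /etc/ssl/certs/<hash>.0 symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) are named for, which is how OpenSSL's directory lookup locates a CA, and -checkend 86400 makes each command exit non-zero if the certificate would expire within the next 86400 seconds (24 hours), so an expiring cert is caught before kubeadm runs.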
	I0414 17:26:36.949743  187666 kubeadm.go:392] StartCluster: {Name:test-preload-120543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-120543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:26:36.949843  187666 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 17:26:36.949895  187666 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:26:36.990220  187666 cri.go:89] found id: ""
	I0414 17:26:36.990307  187666 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 17:26:37.000458  187666 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 17:26:37.000478  187666 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 17:26:37.000525  187666 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 17:26:37.010182  187666 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 17:26:37.010635  187666 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-120543" does not appear in /home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:26:37.010744  187666 kubeconfig.go:62] /home/jenkins/minikube-integration/20349-149500/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-120543" cluster setting kubeconfig missing "test-preload-120543" context setting]
	I0414 17:26:37.011020  187666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:26:37.011559  187666 kapi.go:59] client config for test-preload-120543: &rest.Config{Host:"https://192.168.39.53:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20349-149500/.minikube/profiles/test-preload-120543/client.crt", KeyFile:"/home/jenkins/minikube-integration/20349-149500/.minikube/profiles/test-preload-120543/client.key", CAFile:"/home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 17:26:37.011938  187666 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 17:26:37.011951  187666 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 17:26:37.011956  187666 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 17:26:37.011962  187666 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 17:26:37.012253  187666 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 17:26:37.021288  187666 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.53
	I0414 17:26:37.021311  187666 kubeadm.go:1160] stopping kube-system containers ...
	I0414 17:26:37.021321  187666 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0414 17:26:37.021362  187666 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:26:37.054921  187666 cri.go:89] found id: ""
	I0414 17:26:37.054987  187666 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0414 17:26:37.070865  187666 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:26:37.080138  187666 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:26:37.080157  187666 kubeadm.go:157] found existing configuration files:
	
	I0414 17:26:37.080193  187666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:26:37.088900  187666 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:26:37.088950  187666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:26:37.098097  187666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:26:37.106704  187666 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:26:37.106750  187666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:26:37.115792  187666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:26:37.124589  187666 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:26:37.124632  187666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:26:37.133584  187666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:26:37.142279  187666 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:26:37.142318  187666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
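In the cleanup loop above, grep's exit status 2 means each kubeconfig file itself is missing (not merely lacking the control-plane.minikube.internal line), so every rm -f is a no-op here; the loop exists for the genuine-restart case where stale configs point at a different endpoint and must be regenerated.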
	I0414 17:26:37.151177  187666 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:26:37.160147  187666 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:26:37.246813  187666 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:26:37.824458  187666 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:26:38.080510  187666 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:26:38.137912  187666 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
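Because existing configuration files were found (the "will attempt cluster restart" branch above), minikube reruns individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd local) instead of a full kubeadm init, so the data already under /var/lib/minikube/etcd can be reused rather than re-bootstrapped.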
	I0414 17:26:38.209808  187666 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:26:38.209902  187666 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:26:38.710934  187666 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:26:39.209946  187666 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:26:39.232583  187666 api_server.go:72] duration metric: took 1.02277481s to wait for apiserver process to appear ...
	I0414 17:26:39.232615  187666 api_server.go:88] waiting for apiserver healthz status ...
	I0414 17:26:39.232638  187666 api_server.go:253] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0414 17:26:39.233254  187666 api_server.go:269] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0414 17:26:39.732889  187666 api_server.go:253] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0414 17:26:39.733454  187666 api_server.go:269] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0414 17:26:40.233094  187666 api_server.go:253] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0414 17:26:43.072121  187666 api_server.go:279] https://192.168.39.53:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 17:26:43.072151  187666 api_server.go:103] status: https://192.168.39.53:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 17:26:43.072167  187666 api_server.go:253] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0414 17:26:43.084628  187666 api_server.go:279] https://192.168.39.53:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 17:26:43.084654  187666 api_server.go:103] status: https://192.168.39.53:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 17:26:43.232981  187666 api_server.go:253] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0414 17:26:43.295441  187666 api_server.go:279] https://192.168.39.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 17:26:43.295478  187666 api_server.go:103] status: https://192.168.39.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 17:26:43.733039  187666 api_server.go:253] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0414 17:26:43.738601  187666 api_server.go:279] https://192.168.39.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 17:26:43.738630  187666 api_server.go:103] status: https://192.168.39.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 17:26:44.233292  187666 api_server.go:253] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0414 17:26:44.242512  187666 api_server.go:279] https://192.168.39.53:8443/healthz returned 200:
	ok
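
The healthz progression above is the expected restart sequence: connection refused while the kube-apiserver static pod comes up, 403 because the unauthenticated probe is treated as system:anonymous until the rbac/bootstrap-roles poststarthook installs the default roles, 500 while the remaining poststarthooks finish (the failure reason is withheld from unauthenticated callers), and finally 200. A minimal Go poller along the same lines, using the endpoint from the log; it skips TLS verification for brevity, whereas a real check should trust the cluster CA.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Illustrative only; load minikube's CA instead in real use.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(2 * time.Minute) // arbitrary budget
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.39.53:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("%d: %s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return // "ok"
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("healthz never returned 200")
    }
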
	I0414 17:26:44.252039  187666 api_server.go:141] control plane version: v1.24.4
	I0414 17:26:44.252064  187666 api_server.go:131] duration metric: took 5.019440518s to wait for apiserver health ...
	I0414 17:26:44.252081  187666 cni.go:84] Creating CNI manager for ""
	I0414 17:26:44.252089  187666 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:26:44.253854  187666 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 17:26:44.255064  187666 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 17:26:44.284153  187666 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
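
The 496-byte file copied above is the bridge CNI config selected for the kvm2 driver + crio runtime combination. Its exact contents are not shown in the log, so the conflist below is a hypothetical but typical bridge configuration, written the way minikube places it under /etc/cni/net.d (requires root, matching the sudo mkdir above).

    package main

    import "os"

    // A typical bridge conflist; the name and subnet are illustrative,
    // not the exact 496 bytes minikube ships.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
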
	I0414 17:26:44.311503  187666 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 17:26:44.316149  187666 system_pods.go:59] 8 kube-system pods found
	I0414 17:26:44.316177  187666 system_pods.go:61] "coredns-6d4b75cb6d-m6mgj" [d36d14fe-b7d7-4649-a126-4cd681cb3d38] Running
	I0414 17:26:44.316185  187666 system_pods.go:61] "coredns-6d4b75cb6d-wg6pf" [1dc14478-f165-4a14-b4db-de28c2797797] Running
	I0414 17:26:44.316190  187666 system_pods.go:61] "etcd-test-preload-120543" [c1140dd8-2b1d-4b2c-baf3-b3d9fa652a1f] Running
	I0414 17:26:44.316194  187666 system_pods.go:61] "kube-apiserver-test-preload-120543" [d9bad765-96e0-4529-9b62-94b5d3d4915b] Running
	I0414 17:26:44.316203  187666 system_pods.go:61] "kube-controller-manager-test-preload-120543" [39dca4e9-dbc9-4ad0-8f3e-7372ae455527] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0414 17:26:44.316213  187666 system_pods.go:61] "kube-proxy-f486q" [68975fb3-5454-4caa-9bf3-a703c0d8a65e] Running
	I0414 17:26:44.316222  187666 system_pods.go:61] "kube-scheduler-test-preload-120543" [79bef58f-b5f5-4bad-bbe4-d9a228515349] Running
	I0414 17:26:44.316230  187666 system_pods.go:61] "storage-provisioner" [5fcca2f9-3c7d-42f1-b8f2-21545cf82fbf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0414 17:26:44.316241  187666 system_pods.go:74] duration metric: took 4.712613ms to wait for pod list to return data ...
	I0414 17:26:44.316255  187666 node_conditions.go:102] verifying NodePressure condition ...
	I0414 17:26:44.321485  187666 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 17:26:44.321508  187666 node_conditions.go:123] node cpu capacity is 2
	I0414 17:26:44.321522  187666 node_conditions.go:105] duration metric: took 5.26201ms to run NodePressure ...
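
The NodePressure check reads each node's advertised capacity (here 17734596Ki of ephemeral storage and 2 CPUs). A compact client-go sketch of the same read, assuming a kubeconfig at the default location that points at this cluster:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
                n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
        }
    }
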
	I0414 17:26:44.321542  187666 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:26:44.581321  187666 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0414 17:26:44.586057  187666 kubeadm.go:739] kubelet initialised
	I0414 17:26:44.586090  187666 kubeadm.go:740] duration metric: took 4.740826ms waiting for restarted kubelet to initialise ...
	I0414 17:26:44.586106  187666 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:26:44.592564  187666 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-m6mgj" in "kube-system" namespace to be "Ready" ...
	I0414 17:26:44.597161  187666 pod_ready.go:98] node "test-preload-120543" hosting pod "coredns-6d4b75cb6d-m6mgj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-120543" has status "Ready":"False"
	I0414 17:26:44.597182  187666 pod_ready.go:82] duration metric: took 4.592416ms for pod "coredns-6d4b75cb6d-m6mgj" in "kube-system" namespace to be "Ready" ...
	E0414 17:26:44.597193  187666 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-120543" hosting pod "coredns-6d4b75cb6d-m6mgj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-120543" has status "Ready":"False"
	I0414 17:26:44.597201  187666 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-wg6pf" in "kube-system" namespace to be "Ready" ...
	I0414 17:26:44.606599  187666 pod_ready.go:98] node "test-preload-120543" hosting pod "coredns-6d4b75cb6d-wg6pf" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-120543" has status "Ready":"False"
	I0414 17:26:44.606619  187666 pod_ready.go:82] duration metric: took 9.407546ms for pod "coredns-6d4b75cb6d-wg6pf" in "kube-system" namespace to be "Ready" ...
	E0414 17:26:44.606630  187666 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-120543" hosting pod "coredns-6d4b75cb6d-wg6pf" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-120543" has status "Ready":"False"
	I0414 17:26:44.606636  187666 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-120543" in "kube-system" namespace to be "Ready" ...
	I0414 17:26:44.615719  187666 pod_ready.go:98] node "test-preload-120543" hosting pod "etcd-test-preload-120543" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-120543" has status "Ready":"False"
	I0414 17:26:44.615735  187666 pod_ready.go:82] duration metric: took 9.090985ms for pod "etcd-test-preload-120543" in "kube-system" namespace to be "Ready" ...
	E0414 17:26:44.615742  187666 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-120543" hosting pod "etcd-test-preload-120543" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-120543" has status "Ready":"False"
	I0414 17:26:44.615748  187666 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-120543" in "kube-system" namespace to be "Ready" ...
	I0414 17:26:44.714705  187666 pod_ready.go:98] node "test-preload-120543" hosting pod "kube-apiserver-test-preload-120543" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-120543" has status "Ready":"False"
	I0414 17:26:44.714734  187666 pod_ready.go:82] duration metric: took 98.973536ms for pod "kube-apiserver-test-preload-120543" in "kube-system" namespace to be "Ready" ...
	E0414 17:26:44.714747  187666 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-120543" hosting pod "kube-apiserver-test-preload-120543" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-120543" has status "Ready":"False"
	I0414 17:26:44.714753  187666 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-120543" in "kube-system" namespace to be "Ready" ...
	I0414 17:26:45.116031  187666 pod_ready.go:98] node "test-preload-120543" hosting pod "kube-controller-manager-test-preload-120543" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-120543" has status "Ready":"False"
	I0414 17:26:45.116060  187666 pod_ready.go:82] duration metric: took 401.298878ms for pod "kube-controller-manager-test-preload-120543" in "kube-system" namespace to be "Ready" ...
	E0414 17:26:45.116082  187666 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-120543" hosting pod "kube-controller-manager-test-preload-120543" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-120543" has status "Ready":"False"
	I0414 17:26:45.116092  187666 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f486q" in "kube-system" namespace to be "Ready" ...
	I0414 17:26:45.514318  187666 pod_ready.go:98] node "test-preload-120543" hosting pod "kube-proxy-f486q" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-120543" has status "Ready":"False"
	I0414 17:26:45.514347  187666 pod_ready.go:82] duration metric: took 398.244949ms for pod "kube-proxy-f486q" in "kube-system" namespace to be "Ready" ...
	E0414 17:26:45.514359  187666 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-120543" hosting pod "kube-proxy-f486q" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-120543" has status "Ready":"False"
	I0414 17:26:45.514367  187666 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-120543" in "kube-system" namespace to be "Ready" ...
	I0414 17:26:45.914219  187666 pod_ready.go:98] node "test-preload-120543" hosting pod "kube-scheduler-test-preload-120543" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-120543" has status "Ready":"False"
	I0414 17:26:45.914245  187666 pod_ready.go:82] duration metric: took 399.870793ms for pod "kube-scheduler-test-preload-120543" in "kube-system" namespace to be "Ready" ...
	E0414 17:26:45.914257  187666 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-120543" hosting pod "kube-scheduler-test-preload-120543" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-120543" has status "Ready":"False"
	I0414 17:26:45.914264  187666 pod_ready.go:39] duration metric: took 1.328147234s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
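
Each pod_ready wait above polls for the pod's PodReady condition and, as seen here, bails out early while the hosting node itself is not Ready. A self-contained client-go sketch of that condition check, using one of the pod names from the log and assuming the default-location kubeconfig:

    package main

    import (
        "context"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(p *v1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == v1.PodReady {
                return c.Status == v1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget above
        for time.Now().Before(deadline) {
            p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "etcd-test-preload-120543", metav1.GetOptions{})
            if err == nil && podReady(p) {
                fmt.Println("ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out")
    }
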
	I0414 17:26:45.914288  187666 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 17:26:45.926503  187666 ops.go:34] apiserver oom_adj: -16
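
The oom_adj probe reads the legacy /proc/<pid>/oom_adj interface (range -17..15); the -16 seen here maps from the strongly negative oom_score_adj applied to critical static pods, so the OOM killer avoids the apiserver. A small sketch of the same two-step check, using the pgrep pattern from the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Newest process whose full cmdline matches the pattern, as in the log.
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }
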
	I0414 17:26:45.926520  187666 kubeadm.go:597] duration metric: took 8.926035369s to restartPrimaryControlPlane
	I0414 17:26:45.926528  187666 kubeadm.go:394] duration metric: took 8.976799195s to StartCluster
	I0414 17:26:45.926548  187666 settings.go:142] acquiring lock: {Name:mk0f1596f566b3225bf96154f374fff0641b21e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:26:45.926622  187666 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:26:45.927228  187666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:26:45.927470  187666 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 17:26:45.927547  187666 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 17:26:45.927660  187666 config.go:182] Loaded profile config "test-preload-120543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0414 17:26:45.927658  187666 addons.go:69] Setting storage-provisioner=true in profile "test-preload-120543"
	I0414 17:26:45.927675  187666 addons.go:69] Setting default-storageclass=true in profile "test-preload-120543"
	I0414 17:26:45.927685  187666 addons.go:238] Setting addon storage-provisioner=true in "test-preload-120543"
	W0414 17:26:45.927693  187666 addons.go:247] addon storage-provisioner should already be in state true
	I0414 17:26:45.927727  187666 host.go:66] Checking if "test-preload-120543" exists ...
	I0414 17:26:45.927694  187666 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-120543"
	I0414 17:26:45.928157  187666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:26:45.928200  187666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:26:45.928231  187666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:26:45.928263  187666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:26:45.929296  187666 out.go:177] * Verifying Kubernetes components...
	I0414 17:26:45.930546  187666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:26:45.943289  187666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33219
	I0414 17:26:45.943467  187666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42479
	I0414 17:26:45.943777  187666 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:26:45.943902  187666 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:26:45.944297  187666 main.go:141] libmachine: Using API Version  1
	I0414 17:26:45.944318  187666 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:26:45.944426  187666 main.go:141] libmachine: Using API Version  1
	I0414 17:26:45.944453  187666 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:26:45.944657  187666 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:26:45.944771  187666 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:26:45.944844  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetState
	I0414 17:26:45.945329  187666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:26:45.945372  187666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:26:45.947143  187666 kapi.go:59] client config for test-preload-120543: &rest.Config{Host:"https://192.168.39.53:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20349-149500/.minikube/profiles/test-preload-120543/client.crt", KeyFile:"/home/jenkins/minikube-integration/20349-149500/.minikube/profiles/test-preload-120543/client.key", CAFile:"/home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I0414 17:26:45.947588  187666 addons.go:238] Setting addon default-storageclass=true in "test-preload-120543"
	W0414 17:26:45.947612  187666 addons.go:247] addon default-storageclass should already be in state true
	I0414 17:26:45.947646  187666 host.go:66] Checking if "test-preload-120543" exists ...
	I0414 17:26:45.948039  187666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:26:45.948085  187666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:26:45.960696  187666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34681
	I0414 17:26:45.961209  187666 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:26:45.961653  187666 main.go:141] libmachine: Using API Version  1
	I0414 17:26:45.961675  187666 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:26:45.962061  187666 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:26:45.962253  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetState
	I0414 17:26:45.962618  187666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36395
	I0414 17:26:45.963042  187666 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:26:45.963586  187666 main.go:141] libmachine: Using API Version  1
	I0414 17:26:45.963610  187666 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:26:45.963967  187666 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:26:45.964090  187666 main.go:141] libmachine: (test-preload-120543) Calling .DriverName
	I0414 17:26:45.964415  187666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:26:45.964446  187666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:26:45.966081  187666 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:26:45.967553  187666 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:26:45.967574  187666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 17:26:45.967589  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHHostname
	I0414 17:26:45.970375  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:45.970825  187666 main.go:141] libmachine: (test-preload-120543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bb:0c", ip: ""} in network mk-test-preload-120543: {Iface:virbr1 ExpiryTime:2025-04-14 18:26:13 +0000 UTC Type:0 Mac:52:54:00:98:bb:0c Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-120543 Clientid:01:52:54:00:98:bb:0c}
	I0414 17:26:45.970846  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined IP address 192.168.39.53 and MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:45.970996  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHPort
	I0414 17:26:45.971130  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHKeyPath
	I0414 17:26:45.971255  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHUsername
	I0414 17:26:45.971372  187666 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/test-preload-120543/id_rsa Username:docker}
	I0414 17:26:46.014493  187666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41763
	I0414 17:26:46.015017  187666 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:26:46.015537  187666 main.go:141] libmachine: Using API Version  1
	I0414 17:26:46.015561  187666 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:26:46.015963  187666 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:26:46.016142  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetState
	I0414 17:26:46.017939  187666 main.go:141] libmachine: (test-preload-120543) Calling .DriverName
	I0414 17:26:46.018158  187666 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 17:26:46.018174  187666 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 17:26:46.018188  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHHostname
	I0414 17:26:46.020981  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:46.021373  187666 main.go:141] libmachine: (test-preload-120543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:bb:0c", ip: ""} in network mk-test-preload-120543: {Iface:virbr1 ExpiryTime:2025-04-14 18:26:13 +0000 UTC Type:0 Mac:52:54:00:98:bb:0c Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-120543 Clientid:01:52:54:00:98:bb:0c}
	I0414 17:26:46.021400  187666 main.go:141] libmachine: (test-preload-120543) DBG | domain test-preload-120543 has defined IP address 192.168.39.53 and MAC address 52:54:00:98:bb:0c in network mk-test-preload-120543
	I0414 17:26:46.021526  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHPort
	I0414 17:26:46.021671  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHKeyPath
	I0414 17:26:46.021813  187666 main.go:141] libmachine: (test-preload-120543) Calling .GetSSHUsername
	I0414 17:26:46.021939  187666 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/test-preload-120543/id_rsa Username:docker}
	I0414 17:26:46.107345  187666 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:26:46.126972  187666 node_ready.go:35] waiting up to 6m0s for node "test-preload-120543" to be "Ready" ...
	I0414 17:26:46.188470  187666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:26:46.220867  187666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
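
Addon manifests are applied with the version-pinned kubectl from /var/lib/minikube/binaries and the in-VM kubeconfig rather than the host's kubectl, which sidesteps the version-skew problem flagged at the end of this run. An illustrative exec sketch with the paths from the log; the real invocation runs under sudo inside the VM.

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        for _, m := range []string{
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            "/etc/kubernetes/addons/storageclass.yaml",
        } {
            cmd := exec.Command("/var/lib/minikube/binaries/v1.24.4/kubectl", "apply", "-f", m)
            cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                panic(err)
            }
        }
    }
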
	I0414 17:26:47.164327  187666 main.go:141] libmachine: Making call to close driver server
	I0414 17:26:47.164352  187666 main.go:141] libmachine: (test-preload-120543) Calling .Close
	I0414 17:26:47.164355  187666 main.go:141] libmachine: Making call to close driver server
	I0414 17:26:47.164377  187666 main.go:141] libmachine: (test-preload-120543) Calling .Close
	I0414 17:26:47.164631  187666 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:26:47.164636  187666 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:26:47.164649  187666 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:26:47.164650  187666 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:26:47.164658  187666 main.go:141] libmachine: Making call to close driver server
	I0414 17:26:47.164666  187666 main.go:141] libmachine: (test-preload-120543) Calling .Close
	I0414 17:26:47.164659  187666 main.go:141] libmachine: Making call to close driver server
	I0414 17:26:47.164725  187666 main.go:141] libmachine: (test-preload-120543) Calling .Close
	I0414 17:26:47.164915  187666 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:26:47.164952  187666 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:26:47.164960  187666 main.go:141] libmachine: (test-preload-120543) DBG | Closing plugin on server side
	I0414 17:26:47.164922  187666 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:26:47.164983  187666 main.go:141] libmachine: (test-preload-120543) DBG | Closing plugin on server side
	I0414 17:26:47.164993  187666 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:26:47.169724  187666 main.go:141] libmachine: Making call to close driver server
	I0414 17:26:47.169736  187666 main.go:141] libmachine: (test-preload-120543) Calling .Close
	I0414 17:26:47.169993  187666 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:26:47.170014  187666 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:26:47.171907  187666 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 17:26:47.173055  187666 addons.go:514] duration metric: took 1.245517943s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 17:26:48.130475  187666 node_ready.go:53] node "test-preload-120543" has status "Ready":"False"
	I0414 17:26:50.130729  187666 node_ready.go:53] node "test-preload-120543" has status "Ready":"False"
	I0414 17:26:52.131640  187666 node_ready.go:53] node "test-preload-120543" has status "Ready":"False"
	I0414 17:26:53.630111  187666 node_ready.go:49] node "test-preload-120543" has status "Ready":"True"
	I0414 17:26:53.630141  187666 node_ready.go:38] duration metric: took 7.503140156s for node "test-preload-120543" to be "Ready" ...
	I0414 17:26:53.630153  187666 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:26:53.633279  187666 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-wg6pf" in "kube-system" namespace to be "Ready" ...
	I0414 17:26:53.636665  187666 pod_ready.go:93] pod "coredns-6d4b75cb6d-wg6pf" in "kube-system" namespace has status "Ready":"True"
	I0414 17:26:53.636679  187666 pod_ready.go:82] duration metric: took 3.381951ms for pod "coredns-6d4b75cb6d-wg6pf" in "kube-system" namespace to be "Ready" ...
	I0414 17:26:53.636686  187666 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-120543" in "kube-system" namespace to be "Ready" ...
	I0414 17:26:53.640215  187666 pod_ready.go:93] pod "etcd-test-preload-120543" in "kube-system" namespace has status "Ready":"True"
	I0414 17:26:53.640243  187666 pod_ready.go:82] duration metric: took 3.550057ms for pod "etcd-test-preload-120543" in "kube-system" namespace to be "Ready" ...
	I0414 17:26:53.640253  187666 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-120543" in "kube-system" namespace to be "Ready" ...
	I0414 17:26:53.643294  187666 pod_ready.go:93] pod "kube-apiserver-test-preload-120543" in "kube-system" namespace has status "Ready":"True"
	I0414 17:26:53.643313  187666 pod_ready.go:82] duration metric: took 3.053199ms for pod "kube-apiserver-test-preload-120543" in "kube-system" namespace to be "Ready" ...
	I0414 17:26:53.643323  187666 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-120543" in "kube-system" namespace to be "Ready" ...
	I0414 17:26:55.649029  187666 pod_ready.go:103] pod "kube-controller-manager-test-preload-120543" in "kube-system" namespace has status "Ready":"False"
	I0414 17:26:57.649241  187666 pod_ready.go:93] pod "kube-controller-manager-test-preload-120543" in "kube-system" namespace has status "Ready":"True"
	I0414 17:26:57.649272  187666 pod_ready.go:82] duration metric: took 4.005940603s for pod "kube-controller-manager-test-preload-120543" in "kube-system" namespace to be "Ready" ...
	I0414 17:26:57.649286  187666 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f486q" in "kube-system" namespace to be "Ready" ...
	I0414 17:26:57.652821  187666 pod_ready.go:93] pod "kube-proxy-f486q" in "kube-system" namespace has status "Ready":"True"
	I0414 17:26:57.652854  187666 pod_ready.go:82] duration metric: took 3.559664ms for pod "kube-proxy-f486q" in "kube-system" namespace to be "Ready" ...
	I0414 17:26:57.652865  187666 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-120543" in "kube-system" namespace to be "Ready" ...
	I0414 17:26:57.657937  187666 pod_ready.go:93] pod "kube-scheduler-test-preload-120543" in "kube-system" namespace has status "Ready":"True"
	I0414 17:26:57.657952  187666 pod_ready.go:82] duration metric: took 5.080369ms for pod "kube-scheduler-test-preload-120543" in "kube-system" namespace to be "Ready" ...
	I0414 17:26:57.657959  187666 pod_ready.go:39] duration metric: took 4.027792728s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:26:57.657973  187666 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:26:57.658026  187666 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:26:57.672456  187666 api_server.go:72] duration metric: took 11.744954628s to wait for apiserver process to appear ...
	I0414 17:26:57.672480  187666 api_server.go:88] waiting for apiserver healthz status ...
	I0414 17:26:57.672497  187666 api_server.go:253] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0414 17:26:57.677016  187666 api_server.go:279] https://192.168.39.53:8443/healthz returned 200:
	ok
	I0414 17:26:57.677702  187666 api_server.go:141] control plane version: v1.24.4
	I0414 17:26:57.677718  187666 api_server.go:131] duration metric: took 5.231609ms to wait for apiserver health ...
	I0414 17:26:57.677724  187666 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 17:26:57.831637  187666 system_pods.go:59] 7 kube-system pods found
	I0414 17:26:57.831666  187666 system_pods.go:61] "coredns-6d4b75cb6d-wg6pf" [1dc14478-f165-4a14-b4db-de28c2797797] Running
	I0414 17:26:57.831671  187666 system_pods.go:61] "etcd-test-preload-120543" [c1140dd8-2b1d-4b2c-baf3-b3d9fa652a1f] Running
	I0414 17:26:57.831675  187666 system_pods.go:61] "kube-apiserver-test-preload-120543" [d9bad765-96e0-4529-9b62-94b5d3d4915b] Running
	I0414 17:26:57.831679  187666 system_pods.go:61] "kube-controller-manager-test-preload-120543" [39dca4e9-dbc9-4ad0-8f3e-7372ae455527] Running
	I0414 17:26:57.831682  187666 system_pods.go:61] "kube-proxy-f486q" [68975fb3-5454-4caa-9bf3-a703c0d8a65e] Running
	I0414 17:26:57.831685  187666 system_pods.go:61] "kube-scheduler-test-preload-120543" [79bef58f-b5f5-4bad-bbe4-d9a228515349] Running
	I0414 17:26:57.831688  187666 system_pods.go:61] "storage-provisioner" [5fcca2f9-3c7d-42f1-b8f2-21545cf82fbf] Running
	I0414 17:26:57.831695  187666 system_pods.go:74] duration metric: took 153.965503ms to wait for pod list to return data ...
	I0414 17:26:57.831701  187666 default_sa.go:34] waiting for default service account to be created ...
	I0414 17:26:58.030301  187666 default_sa.go:45] found service account: "default"
	I0414 17:26:58.030328  187666 default_sa.go:55] duration metric: took 198.620423ms for default service account to be created ...
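
The default service account is created asynchronously by the controller manager's serviceaccount controller, which is why a freshly restarted cluster has to be polled for it (about 198ms here). A short client-go sketch of the same wait, again assuming the default-location kubeconfig:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        deadline := time.Now().Add(time.Minute)
        for time.Now().Before(deadline) {
            if sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(),
                "default", metav1.GetOptions{}); err == nil {
                fmt.Println("found service account:", sa.Name)
                return
            }
            time.Sleep(200 * time.Millisecond)
        }
        fmt.Println("timed out")
    }
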
	I0414 17:26:58.030337  187666 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 17:26:58.232766  187666 system_pods.go:86] 7 kube-system pods found
	I0414 17:26:58.232804  187666 system_pods.go:89] "coredns-6d4b75cb6d-wg6pf" [1dc14478-f165-4a14-b4db-de28c2797797] Running
	I0414 17:26:58.232812  187666 system_pods.go:89] "etcd-test-preload-120543" [c1140dd8-2b1d-4b2c-baf3-b3d9fa652a1f] Running
	I0414 17:26:58.232815  187666 system_pods.go:89] "kube-apiserver-test-preload-120543" [d9bad765-96e0-4529-9b62-94b5d3d4915b] Running
	I0414 17:26:58.232821  187666 system_pods.go:89] "kube-controller-manager-test-preload-120543" [39dca4e9-dbc9-4ad0-8f3e-7372ae455527] Running
	I0414 17:26:58.232825  187666 system_pods.go:89] "kube-proxy-f486q" [68975fb3-5454-4caa-9bf3-a703c0d8a65e] Running
	I0414 17:26:58.232831  187666 system_pods.go:89] "kube-scheduler-test-preload-120543" [79bef58f-b5f5-4bad-bbe4-d9a228515349] Running
	I0414 17:26:58.232836  187666 system_pods.go:89] "storage-provisioner" [5fcca2f9-3c7d-42f1-b8f2-21545cf82fbf] Running
	I0414 17:26:58.232846  187666 system_pods.go:126] duration metric: took 202.501901ms to wait for k8s-apps to be running ...
	I0414 17:26:58.232870  187666 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 17:26:58.232920  187666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:26:58.248607  187666 system_svc.go:56] duration metric: took 15.730724ms WaitForService to wait for kubelet
	I0414 17:26:58.248631  187666 kubeadm.go:582] duration metric: took 12.321133186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 17:26:58.248655  187666 node_conditions.go:102] verifying NodePressure condition ...
	I0414 17:26:58.430786  187666 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 17:26:58.430810  187666 node_conditions.go:123] node cpu capacity is 2
	I0414 17:26:58.430831  187666 node_conditions.go:105] duration metric: took 182.164131ms to run NodePressure ...
	I0414 17:26:58.430842  187666 start.go:241] waiting for startup goroutines ...
	I0414 17:26:58.430851  187666 start.go:246] waiting for cluster config update ...
	I0414 17:26:58.430863  187666 start.go:255] writing updated cluster config ...
	I0414 17:26:58.431129  187666 ssh_runner.go:195] Run: rm -f paused
	I0414 17:26:58.476145  187666 start.go:600] kubectl: 1.32.3, cluster: 1.24.4 (minor skew: 8)
	I0414 17:26:58.478073  187666 out.go:201] 
	W0414 17:26:58.479280  187666 out.go:270] ! /usr/local/bin/kubectl is version 1.32.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0414 17:26:58.480417  187666 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0414 17:26:58.481608  187666 out.go:177] * Done! kubectl is now configured to use "test-preload-120543" cluster and "default" namespace by default
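
The closing warning is minikube's kubectl skew check: the host kubectl is v1.32.3 while the cluster runs v1.24.4, so the minor-version gap is 32 - 24 = 8, far outside the one-minor-version skew kubectl officially supports in either direction. The suggested 'minikube kubectl --' wrapper downloads and runs a kubectl matching the cluster version.
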
	
	
	==> CRI-O <==
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.319823155Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=025e2c70-79e1-4b29-9a2b-4f400b538cc5 name=/runtime.v1.RuntimeService/Version
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.320973424Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a719c686-9dc7-4f81-836f-9749a1fbae07 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.321549245Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744651619321526969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a719c686-9dc7-4f81-836f-9749a1fbae07 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.321992476Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=862676ae-c28b-4a77-b977-8906941c2cfe name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.322066082Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=862676ae-c28b-4a77-b977-8906941c2cfe name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.322260624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:947546f5ff9db25d04d06116e71ee566489f1719aa3a1d82731ff52f5ef05498,PodSandboxId:4985cf076c7baaf846f9439e27add61201e78252c5d86d16a522cd0d9eae772a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744651611316350189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wg6pf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc14478-f165-4a14-b4db-de28c2797797,},Annotations:map[string]string{io.kubernetes.container.hash: 2c00456a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69fcfec4b1d377e71f4df2c1c29c64ab537c13a525859d21aa2f2dd0cf421766,PodSandboxId:b93746e2924b2ac171207af03d611e0478cc466a174c5ca4d50782dfadafde5a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744651604254359999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f486q,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 68975fb3-5454-4caa-9bf3-a703c0d8a65e,},Annotations:map[string]string{io.kubernetes.container.hash: fc7b5541,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16e5845ec0fd064eae416594f28387f2888cb7c482958ef6463e05152f0bd9cf,PodSandboxId:a8c52246ab6edc8b2e7796bba7b6391273900b8f49ea90c479c7a81089d5afdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744651603894339293,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f
cca2f9-3c7d-42f1-b8f2-21545cf82fbf,},Annotations:map[string]string{io.kubernetes.container.hash: 5feafbac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b49b7c686fc4d7c2d22169b2822ac5944354c8c6c17a383e662d4dbc515c35f0,PodSandboxId:fd58fb70a0afca22d42e8c549858de0734010c628ec7377a5ede0ffc3e0ba63a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744651598957726318,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-120543,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 8604203172610863e0898c8c6cbc18b3,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23e43b38356f184507b25a88073e97915a121f363524fd71ee9b3f9609333e9d,PodSandboxId:4f9c85732a6dede7c5026e653974f42efab56dd15e4f1b9b6c91b73236268b3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744651598977509954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-120543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: d5eed08ea7e635a10990eda28e218b81,},Annotations:map[string]string{io.kubernetes.container.hash: 98f065c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b374bdd4989d4c7825705e9689c47f80d255f5084f6fca1050aec00b1d94547,PodSandboxId:18174a2514265e153af7ea764c5eab26f0cd981156e5313a2d24f55f5483b5e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744651598889642800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-120543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc5818b4b0c06cf74c109c012546c0b,}
,Annotations:map[string]string{io.kubernetes.container.hash: 580b9138,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e41c6e29da46be1f7f55e52678f852a309f1394a3d4d81fa95cacd9a6917a,PodSandboxId:1cb30c73dfeaddf730422993d8deca4eb43d1394106709bf34934ddd78078d1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744651598859764017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-120543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20f39008e2a94395177576f767cf8029,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=862676ae-c28b-4a77-b977-8906941c2cfe name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.357155596Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f7187adc-4502-4dd2-9999-a4dd769f4817 name=/runtime.v1.RuntimeService/Version
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.357217986Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f7187adc-4502-4dd2-9999-a4dd769f4817 name=/runtime.v1.RuntimeService/Version
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.359697774Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd97fc22-2d0b-4c71-8ab6-a0dff63aa986 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.360185685Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744651619360163883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd97fc22-2d0b-4c71-8ab6-a0dff63aa986 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.360748582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed4a9907-9805-4210-9ce3-9a094060beab name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.360825249Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed4a9907-9805-4210-9ce3-9a094060beab name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.360987419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:947546f5ff9db25d04d06116e71ee566489f1719aa3a1d82731ff52f5ef05498,PodSandboxId:4985cf076c7baaf846f9439e27add61201e78252c5d86d16a522cd0d9eae772a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744651611316350189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wg6pf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc14478-f165-4a14-b4db-de28c2797797,},Annotations:map[string]string{io.kubernetes.container.hash: 2c00456a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69fcfec4b1d377e71f4df2c1c29c64ab537c13a525859d21aa2f2dd0cf421766,PodSandboxId:b93746e2924b2ac171207af03d611e0478cc466a174c5ca4d50782dfadafde5a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744651604254359999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f486q,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 68975fb3-5454-4caa-9bf3-a703c0d8a65e,},Annotations:map[string]string{io.kubernetes.container.hash: fc7b5541,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16e5845ec0fd064eae416594f28387f2888cb7c482958ef6463e05152f0bd9cf,PodSandboxId:a8c52246ab6edc8b2e7796bba7b6391273900b8f49ea90c479c7a81089d5afdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744651603894339293,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f
cca2f9-3c7d-42f1-b8f2-21545cf82fbf,},Annotations:map[string]string{io.kubernetes.container.hash: 5feafbac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b49b7c686fc4d7c2d22169b2822ac5944354c8c6c17a383e662d4dbc515c35f0,PodSandboxId:fd58fb70a0afca22d42e8c549858de0734010c628ec7377a5ede0ffc3e0ba63a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744651598957726318,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-120543,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 8604203172610863e0898c8c6cbc18b3,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23e43b38356f184507b25a88073e97915a121f363524fd71ee9b3f9609333e9d,PodSandboxId:4f9c85732a6dede7c5026e653974f42efab56dd15e4f1b9b6c91b73236268b3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744651598977509954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-120543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: d5eed08ea7e635a10990eda28e218b81,},Annotations:map[string]string{io.kubernetes.container.hash: 98f065c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b374bdd4989d4c7825705e9689c47f80d255f5084f6fca1050aec00b1d94547,PodSandboxId:18174a2514265e153af7ea764c5eab26f0cd981156e5313a2d24f55f5483b5e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744651598889642800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-120543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc5818b4b0c06cf74c109c012546c0b,}
,Annotations:map[string]string{io.kubernetes.container.hash: 580b9138,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e41c6e29da46be1f7f55e52678f852a309f1394a3d4d81fa95cacd9a6917a,PodSandboxId:1cb30c73dfeaddf730422993d8deca4eb43d1394106709bf34934ddd78078d1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744651598859764017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-120543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20f39008e2a94395177576f767cf8029,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed4a9907-9805-4210-9ce3-9a094060beab name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.373485305Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=5da81c1f-6bab-43e2-a8a4-b5209a8a0780 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.373688307Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4985cf076c7baaf846f9439e27add61201e78252c5d86d16a522cd0d9eae772a,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-wg6pf,Uid:1dc14478-f165-4a14-b4db-de28c2797797,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744651611088276595,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-wg6pf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc14478-f165-4a14-b4db-de28c2797797,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-14T17:26:43.185395444Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b93746e2924b2ac171207af03d611e0478cc466a174c5ca4d50782dfadafde5a,Metadata:&PodSandboxMetadata{Name:kube-proxy-f486q,Uid:68975fb3-5454-4caa-9bf3-a703c0d8a65e,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1744651604097664036,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-f486q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68975fb3-5454-4caa-9bf3-a703c0d8a65e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-14T17:26:43.185391107Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a8c52246ab6edc8b2e7796bba7b6391273900b8f49ea90c479c7a81089d5afdb,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:5fcca2f9-3c7d-42f1-b8f2-21545cf82fbf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744651603801370684,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fcca2f9-3c7d-42f1-b8f2-2154
5cf82fbf,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-04-14T17:26:43.185393441Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1cb30c73dfeaddf730422993d8deca4eb43d1394106709bf34934ddd78078d1d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-120543,Uid:20f3900
8e2a94395177576f767cf8029,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744651598727742300,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-120543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20f39008e2a94395177576f767cf8029,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 20f39008e2a94395177576f767cf8029,kubernetes.io/config.seen: 2025-04-14T17:26:38.176408769Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fd58fb70a0afca22d42e8c549858de0734010c628ec7377a5ede0ffc3e0ba63a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-120543,Uid:8604203172610863e0898c8c6cbc18b3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744651598725691410,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-120543,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8604203172610863e0898c8c6cbc18b3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8604203172610863e0898c8c6cbc18b3,kubernetes.io/config.seen: 2025-04-14T17:26:38.176407715Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4f9c85732a6dede7c5026e653974f42efab56dd15e4f1b9b6c91b73236268b3f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-120543,Uid:d5eed08ea7e635a10990eda28e218b81,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744651598720258150,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-120543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5eed08ea7e635a10990eda28e218b81,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.53:8443,kubernetes.io/config.hash: d5eed08ea7e635a10990eda28e218b81,kube
rnetes.io/config.seen: 2025-04-14T17:26:38.176375812Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:18174a2514265e153af7ea764c5eab26f0cd981156e5313a2d24f55f5483b5e2,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-120543,Uid:bfc5818b4b0c06cf74c109c012546c0b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744651598704541299,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-120543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc5818b4b0c06cf74c109c012546c0b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.53:2379,kubernetes.io/config.hash: bfc5818b4b0c06cf74c109c012546c0b,kubernetes.io/config.seen: 2025-04-14T17:26:38.181351032Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5da81c1f-6bab-43e2-a8a4-b5209a8a0780 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.374360672Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22ba9389-c5ba-4d22-9bf2-4f311266c92c name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.374428304Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22ba9389-c5ba-4d22-9bf2-4f311266c92c name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.374600655Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:947546f5ff9db25d04d06116e71ee566489f1719aa3a1d82731ff52f5ef05498,PodSandboxId:4985cf076c7baaf846f9439e27add61201e78252c5d86d16a522cd0d9eae772a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744651611316350189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wg6pf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc14478-f165-4a14-b4db-de28c2797797,},Annotations:map[string]string{io.kubernetes.container.hash: 2c00456a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69fcfec4b1d377e71f4df2c1c29c64ab537c13a525859d21aa2f2dd0cf421766,PodSandboxId:b93746e2924b2ac171207af03d611e0478cc466a174c5ca4d50782dfadafde5a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744651604254359999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f486q,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 68975fb3-5454-4caa-9bf3-a703c0d8a65e,},Annotations:map[string]string{io.kubernetes.container.hash: fc7b5541,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16e5845ec0fd064eae416594f28387f2888cb7c482958ef6463e05152f0bd9cf,PodSandboxId:a8c52246ab6edc8b2e7796bba7b6391273900b8f49ea90c479c7a81089d5afdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744651603894339293,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f
cca2f9-3c7d-42f1-b8f2-21545cf82fbf,},Annotations:map[string]string{io.kubernetes.container.hash: 5feafbac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b49b7c686fc4d7c2d22169b2822ac5944354c8c6c17a383e662d4dbc515c35f0,PodSandboxId:fd58fb70a0afca22d42e8c549858de0734010c628ec7377a5ede0ffc3e0ba63a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744651598957726318,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-120543,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 8604203172610863e0898c8c6cbc18b3,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23e43b38356f184507b25a88073e97915a121f363524fd71ee9b3f9609333e9d,PodSandboxId:4f9c85732a6dede7c5026e653974f42efab56dd15e4f1b9b6c91b73236268b3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744651598977509954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-120543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: d5eed08ea7e635a10990eda28e218b81,},Annotations:map[string]string{io.kubernetes.container.hash: 98f065c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b374bdd4989d4c7825705e9689c47f80d255f5084f6fca1050aec00b1d94547,PodSandboxId:18174a2514265e153af7ea764c5eab26f0cd981156e5313a2d24f55f5483b5e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744651598889642800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-120543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc5818b4b0c06cf74c109c012546c0b,}
,Annotations:map[string]string{io.kubernetes.container.hash: 580b9138,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e41c6e29da46be1f7f55e52678f852a309f1394a3d4d81fa95cacd9a6917a,PodSandboxId:1cb30c73dfeaddf730422993d8deca4eb43d1394106709bf34934ddd78078d1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744651598859764017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-120543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20f39008e2a94395177576f767cf8029,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22ba9389-c5ba-4d22-9bf2-4f311266c92c name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.394361592Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b663d39-dddf-4175-99ec-56dad2d3527e name=/runtime.v1.RuntimeService/Version
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.394510478Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b663d39-dddf-4175-99ec-56dad2d3527e name=/runtime.v1.RuntimeService/Version
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.396023398Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6298515-1d1c-403d-a7be-bb5d18ccf458 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.396471225Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744651619396449489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6298515-1d1c-403d-a7be-bb5d18ccf458 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.397163690Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a869c80-a3a4-49a3-bc36-c104754e7746 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.397226635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a869c80-a3a4-49a3-bc36-c104754e7746 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:26:59 test-preload-120543 crio[690]: time="2025-04-14 17:26:59.397392281Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:947546f5ff9db25d04d06116e71ee566489f1719aa3a1d82731ff52f5ef05498,PodSandboxId:4985cf076c7baaf846f9439e27add61201e78252c5d86d16a522cd0d9eae772a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744651611316350189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wg6pf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc14478-f165-4a14-b4db-de28c2797797,},Annotations:map[string]string{io.kubernetes.container.hash: 2c00456a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69fcfec4b1d377e71f4df2c1c29c64ab537c13a525859d21aa2f2dd0cf421766,PodSandboxId:b93746e2924b2ac171207af03d611e0478cc466a174c5ca4d50782dfadafde5a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744651604254359999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f486q,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 68975fb3-5454-4caa-9bf3-a703c0d8a65e,},Annotations:map[string]string{io.kubernetes.container.hash: fc7b5541,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16e5845ec0fd064eae416594f28387f2888cb7c482958ef6463e05152f0bd9cf,PodSandboxId:a8c52246ab6edc8b2e7796bba7b6391273900b8f49ea90c479c7a81089d5afdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744651603894339293,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f
cca2f9-3c7d-42f1-b8f2-21545cf82fbf,},Annotations:map[string]string{io.kubernetes.container.hash: 5feafbac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b49b7c686fc4d7c2d22169b2822ac5944354c8c6c17a383e662d4dbc515c35f0,PodSandboxId:fd58fb70a0afca22d42e8c549858de0734010c628ec7377a5ede0ffc3e0ba63a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744651598957726318,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-120543,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 8604203172610863e0898c8c6cbc18b3,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23e43b38356f184507b25a88073e97915a121f363524fd71ee9b3f9609333e9d,PodSandboxId:4f9c85732a6dede7c5026e653974f42efab56dd15e4f1b9b6c91b73236268b3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744651598977509954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-120543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: d5eed08ea7e635a10990eda28e218b81,},Annotations:map[string]string{io.kubernetes.container.hash: 98f065c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b374bdd4989d4c7825705e9689c47f80d255f5084f6fca1050aec00b1d94547,PodSandboxId:18174a2514265e153af7ea764c5eab26f0cd981156e5313a2d24f55f5483b5e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744651598889642800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-120543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfc5818b4b0c06cf74c109c012546c0b,}
,Annotations:map[string]string{io.kubernetes.container.hash: 580b9138,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e41c6e29da46be1f7f55e52678f852a309f1394a3d4d81fa95cacd9a6917a,PodSandboxId:1cb30c73dfeaddf730422993d8deca4eb43d1394106709bf34934ddd78078d1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744651598859764017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-120543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20f39008e2a94395177576f767cf8029,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a869c80-a3a4-49a3-bc36-c104754e7746 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	947546f5ff9db       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   4985cf076c7ba       coredns-6d4b75cb6d-wg6pf
	69fcfec4b1d37       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   b93746e2924b2       kube-proxy-f486q
	16e5845ec0fd0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   a8c52246ab6ed       storage-provisioner
	23e43b38356f1       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   4f9c85732a6de       kube-apiserver-test-preload-120543
	b49b7c686fc4d       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   fd58fb70a0afc       kube-controller-manager-test-preload-120543
	0b374bdd4989d       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   18174a2514265       etcd-test-preload-120543
	aa3e41c6e29da       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   1cb30c73dfead       kube-scheduler-test-preload-120543
	
	
	==> coredns [947546f5ff9db25d04d06116e71ee566489f1719aa3a1d82731ff52f5ef05498] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:41994 - 14305 "HINFO IN 6090070049127848534.1763219533607621083. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023557361s
	
	
	==> describe nodes <==
	Name:               test-preload-120543
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-120543
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f1e69a1cd498979c80dbe968253c827f6eb2cf37
	                    minikube.k8s.io/name=test-preload-120543
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T17_25_29_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 17:25:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-120543
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 17:26:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 17:26:53 +0000   Mon, 14 Apr 2025 17:25:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 17:26:53 +0000   Mon, 14 Apr 2025 17:25:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 17:26:53 +0000   Mon, 14 Apr 2025 17:25:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 17:26:53 +0000   Mon, 14 Apr 2025 17:26:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    test-preload-120543
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6073bc87d12344369c3707871fa00dc3
	  System UUID:                6073bc87-d123-4436-9c37-07871fa00dc3
	  Boot ID:                    067b2f65-2b85-4806-b5ac-bb788ce81c92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-wg6pf                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     77s
	  kube-system                 etcd-test-preload-120543                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         91s
	  kube-system                 kube-apiserver-test-preload-120543             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-test-preload-120543    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-f486q                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-test-preload-120543             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14s                kube-proxy       
	  Normal  Starting                 75s                kube-proxy       
	  Normal  Starting                 90s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  90s                kubelet          Node test-preload-120543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    90s                kubelet          Node test-preload-120543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     90s                kubelet          Node test-preload-120543 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  90s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                79s                kubelet          Node test-preload-120543 status is now: NodeReady
	  Normal  RegisteredNode           78s                node-controller  Node test-preload-120543 event: Registered Node test-preload-120543 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-120543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-120543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-120543 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                 node-controller  Node test-preload-120543 event: Registered Node test-preload-120543 in Controller
	
	
	==> dmesg <==
	[Apr14 17:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050040] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039180] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.902844] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.512700] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.607284] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.103950] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.058587] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063610] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.170250] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.134845] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.267638] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[ +12.733238] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[  +0.054858] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.546392] systemd-fstab-generator[1136]: Ignoring "noauto" option for root device
	[  +5.871430] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.102922] systemd-fstab-generator[1795]: Ignoring "noauto" option for root device
	[  +5.162232] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [0b374bdd4989d4c7825705e9689c47f80d255f5084f6fca1050aec00b1d94547] <==
	{"level":"info","ts":"2025-04-14T17:26:39.247Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8389b8f6c4f004d4","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-04-14T17:26:39.247Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-04-14T17:26:39.250Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 switched to configuration voters=(9478310260783449300)"}
	{"level":"info","ts":"2025-04-14T17:26:39.250Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1138cde6dcc1ce27","local-member-id":"8389b8f6c4f004d4","added-peer-id":"8389b8f6c4f004d4","added-peer-peer-urls":["https://192.168.39.53:2380"]}
	{"level":"info","ts":"2025-04-14T17:26:39.250Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1138cde6dcc1ce27","local-member-id":"8389b8f6c4f004d4","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T17:26:39.250Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T17:26:39.256Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-14T17:26:39.257Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8389b8f6c4f004d4","initial-advertise-peer-urls":["https://192.168.39.53:2380"],"listen-peer-urls":["https://192.168.39.53:2380"],"advertise-client-urls":["https://192.168.39.53:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.53:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-14T17:26:39.257Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-14T17:26:39.261Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.53:2380"}
	{"level":"info","ts":"2025-04-14T17:26:39.261Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.53:2380"}
	{"level":"info","ts":"2025-04-14T17:26:40.723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-14T17:26:40.724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-14T17:26:40.724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 received MsgPreVoteResp from 8389b8f6c4f004d4 at term 2"}
	{"level":"info","ts":"2025-04-14T17:26:40.724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 became candidate at term 3"}
	{"level":"info","ts":"2025-04-14T17:26:40.724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 received MsgVoteResp from 8389b8f6c4f004d4 at term 3"}
	{"level":"info","ts":"2025-04-14T17:26:40.724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 became leader at term 3"}
	{"level":"info","ts":"2025-04-14T17:26:40.724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8389b8f6c4f004d4 elected leader 8389b8f6c4f004d4 at term 3"}
	{"level":"info","ts":"2025-04-14T17:26:40.724Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8389b8f6c4f004d4","local-member-attributes":"{Name:test-preload-120543 ClientURLs:[https://192.168.39.53:2379]}","request-path":"/0/members/8389b8f6c4f004d4/attributes","cluster-id":"1138cde6dcc1ce27","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T17:26:40.724Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T17:26:40.726Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.53:2379"}
	{"level":"info","ts":"2025-04-14T17:26:40.726Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T17:26:40.727Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T17:26:40.727Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T17:26:40.727Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 17:26:59 up 0 min,  0 users,  load average: 0.74, 0.22, 0.08
	Linux test-preload-120543 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [23e43b38356f184507b25a88073e97915a121f363524fd71ee9b3f9609333e9d] <==
	I0414 17:26:43.011885       1 controller.go:85] Starting OpenAPI V3 controller
	I0414 17:26:43.012005       1 naming_controller.go:291] Starting NamingConditionController
	I0414 17:26:43.012147       1 establishing_controller.go:76] Starting EstablishingController
	I0414 17:26:43.012245       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0414 17:26:43.012277       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0414 17:26:43.012363       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0414 17:26:43.083499       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0414 17:26:43.088304       1 cache.go:39] Caches are synced for autoregister controller
	I0414 17:26:43.088471       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0414 17:26:43.090991       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0414 17:26:43.105707       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0414 17:26:43.107144       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	E0414 17:26:43.111285       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0414 17:26:43.118581       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0414 17:26:43.157739       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 17:26:43.683884       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0414 17:26:43.992361       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 17:26:44.445644       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0414 17:26:44.460597       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0414 17:26:44.509744       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0414 17:26:44.531486       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 17:26:44.538987       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 17:26:44.627597       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0414 17:26:55.820211       1 controller.go:611] quota admission added evaluator for: endpoints
	I0414 17:26:55.844871       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b49b7c686fc4d7c2d22169b2822ac5944354c8c6c17a383e662d4dbc515c35f0] <==
	I0414 17:26:55.803906       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0414 17:26:55.804123       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-120543. Assuming now as a timestamp.
	I0414 17:26:55.804172       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0414 17:26:55.804323       1 event.go:294] "Event occurred" object="test-preload-120543" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-120543 event: Registered Node test-preload-120543 in Controller"
	I0414 17:26:55.808131       1 shared_informer.go:262] Caches are synced for crt configmap
	I0414 17:26:55.811508       1 shared_informer.go:262] Caches are synced for endpoint
	I0414 17:26:55.817214       1 shared_informer.go:262] Caches are synced for job
	I0414 17:26:55.817307       1 shared_informer.go:262] Caches are synced for PVC protection
	I0414 17:26:55.817341       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0414 17:26:55.823521       1 shared_informer.go:262] Caches are synced for deployment
	I0414 17:26:55.830980       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0414 17:26:55.837254       1 shared_informer.go:262] Caches are synced for daemon sets
	I0414 17:26:55.841569       1 shared_informer.go:262] Caches are synced for stateful set
	I0414 17:26:55.889558       1 shared_informer.go:262] Caches are synced for PV protection
	I0414 17:26:55.900683       1 shared_informer.go:262] Caches are synced for HPA
	I0414 17:26:55.906322       1 shared_informer.go:262] Caches are synced for expand
	I0414 17:26:55.917152       1 shared_informer.go:262] Caches are synced for attach detach
	I0414 17:26:55.929338       1 shared_informer.go:262] Caches are synced for persistent volume
	I0414 17:26:55.981479       1 shared_informer.go:262] Caches are synced for resource quota
	I0414 17:26:56.018700       1 shared_informer.go:262] Caches are synced for resource quota
	I0414 17:26:56.067391       1 shared_informer.go:262] Caches are synced for disruption
	I0414 17:26:56.067939       1 disruption.go:371] Sending events to api server.
	I0414 17:26:56.461480       1 shared_informer.go:262] Caches are synced for garbage collector
	I0414 17:26:56.504262       1 shared_informer.go:262] Caches are synced for garbage collector
	I0414 17:26:56.504346       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [69fcfec4b1d377e71f4df2c1c29c64ab537c13a525859d21aa2f2dd0cf421766] <==
	I0414 17:26:44.572856       1 node.go:163] Successfully retrieved node IP: 192.168.39.53
	I0414 17:26:44.573009       1 server_others.go:138] "Detected node IP" address="192.168.39.53"
	I0414 17:26:44.574321       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0414 17:26:44.616874       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0414 17:26:44.616906       1 server_others.go:206] "Using iptables Proxier"
	I0414 17:26:44.616971       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0414 17:26:44.617683       1 server.go:661] "Version info" version="v1.24.4"
	I0414 17:26:44.617712       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 17:26:44.619140       1 config.go:317] "Starting service config controller"
	I0414 17:26:44.619168       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0414 17:26:44.619190       1 config.go:226] "Starting endpoint slice config controller"
	I0414 17:26:44.619194       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0414 17:26:44.621326       1 config.go:444] "Starting node config controller"
	I0414 17:26:44.621351       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0414 17:26:44.719902       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0414 17:26:44.719956       1 shared_informer.go:262] Caches are synced for service config
	I0414 17:26:44.721531       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [aa3e41c6e29da46be1f7f55e52678f852a309f1394a3d4d81fa95cacd9a6917a] <==
	I0414 17:26:39.698043       1 serving.go:348] Generated self-signed cert in-memory
	I0414 17:26:43.145064       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0414 17:26:43.146448       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 17:26:43.170278       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0414 17:26:43.170459       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0414 17:26:43.170603       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0414 17:26:43.170638       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 17:26:43.170672       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0414 17:26:43.170693       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0414 17:26:43.172131       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0414 17:26:43.172240       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0414 17:26:43.273306       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I0414 17:26:43.273645       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0414 17:26:43.275680       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 17:26:43 test-preload-120543 kubelet[1143]: I0414 17:26:43.237127    1143 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcj59\" (UniqueName: \"kubernetes.io/projected/5fcca2f9-3c7d-42f1-b8f2-21545cf82fbf-kube-api-access-xcj59\") pod \"storage-provisioner\" (UID: \"5fcca2f9-3c7d-42f1-b8f2-21545cf82fbf\") " pod="kube-system/storage-provisioner"
	Apr 14 17:26:43 test-preload-120543 kubelet[1143]: I0414 17:26:43.237155    1143 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1dc14478-f165-4a14-b4db-de28c2797797-config-volume\") pod \"coredns-6d4b75cb6d-wg6pf\" (UID: \"1dc14478-f165-4a14-b4db-de28c2797797\") " pod="kube-system/coredns-6d4b75cb6d-wg6pf"
	Apr 14 17:26:43 test-preload-120543 kubelet[1143]: I0414 17:26:43.237180    1143 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5295\" (UniqueName: \"kubernetes.io/projected/1dc14478-f165-4a14-b4db-de28c2797797-kube-api-access-g5295\") pod \"coredns-6d4b75cb6d-wg6pf\" (UID: \"1dc14478-f165-4a14-b4db-de28c2797797\") " pod="kube-system/coredns-6d4b75cb6d-wg6pf"
	Apr 14 17:26:43 test-preload-120543 kubelet[1143]: I0414 17:26:43.237198    1143 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68975fb3-5454-4caa-9bf3-a703c0d8a65e-xtables-lock\") pod \"kube-proxy-f486q\" (UID: \"68975fb3-5454-4caa-9bf3-a703c0d8a65e\") " pod="kube-system/kube-proxy-f486q"
	Apr 14 17:26:43 test-preload-120543 kubelet[1143]: I0414 17:26:43.237225    1143 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgwr6\" (UniqueName: \"kubernetes.io/projected/68975fb3-5454-4caa-9bf3-a703c0d8a65e-kube-api-access-cgwr6\") pod \"kube-proxy-f486q\" (UID: \"68975fb3-5454-4caa-9bf3-a703c0d8a65e\") " pod="kube-system/kube-proxy-f486q"
	Apr 14 17:26:43 test-preload-120543 kubelet[1143]: I0414 17:26:43.237237    1143 reconciler.go:159] "Reconciler: start to sync state"
	Apr 14 17:26:43 test-preload-120543 kubelet[1143]: I0414 17:26:43.366837    1143 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74mfh\" (UniqueName: \"kubernetes.io/projected/d36d14fe-b7d7-4649-a126-4cd681cb3d38-kube-api-access-74mfh\") pod \"d36d14fe-b7d7-4649-a126-4cd681cb3d38\" (UID: \"d36d14fe-b7d7-4649-a126-4cd681cb3d38\") "
	Apr 14 17:26:43 test-preload-120543 kubelet[1143]: I0414 17:26:43.366981    1143 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d36d14fe-b7d7-4649-a126-4cd681cb3d38-config-volume\") pod \"d36d14fe-b7d7-4649-a126-4cd681cb3d38\" (UID: \"d36d14fe-b7d7-4649-a126-4cd681cb3d38\") "
	Apr 14 17:26:43 test-preload-120543 kubelet[1143]: W0414 17:26:43.367865    1143 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/d36d14fe-b7d7-4649-a126-4cd681cb3d38/volumes/kubernetes.io~projected/kube-api-access-74mfh: clearQuota called, but quotas disabled
	Apr 14 17:26:43 test-preload-120543 kubelet[1143]: W0414 17:26:43.367897    1143 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/d36d14fe-b7d7-4649-a126-4cd681cb3d38/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Apr 14 17:26:43 test-preload-120543 kubelet[1143]: E0414 17:26:43.368529    1143 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 14 17:26:43 test-preload-120543 kubelet[1143]: E0414 17:26:43.368619    1143 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/1dc14478-f165-4a14-b4db-de28c2797797-config-volume podName:1dc14478-f165-4a14-b4db-de28c2797797 nodeName:}" failed. No retries permitted until 2025-04-14 17:26:43.868589706 +0000 UTC m=+5.821617922 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1dc14478-f165-4a14-b4db-de28c2797797-config-volume") pod "coredns-6d4b75cb6d-wg6pf" (UID: "1dc14478-f165-4a14-b4db-de28c2797797") : object "kube-system"/"coredns" not registered
	Apr 14 17:26:43 test-preload-120543 kubelet[1143]: I0414 17:26:43.368770    1143 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d36d14fe-b7d7-4649-a126-4cd681cb3d38-config-volume" (OuterVolumeSpecName: "config-volume") pod "d36d14fe-b7d7-4649-a126-4cd681cb3d38" (UID: "d36d14fe-b7d7-4649-a126-4cd681cb3d38"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Apr 14 17:26:43 test-preload-120543 kubelet[1143]: I0414 17:26:43.368069    1143 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d36d14fe-b7d7-4649-a126-4cd681cb3d38-kube-api-access-74mfh" (OuterVolumeSpecName: "kube-api-access-74mfh") pod "d36d14fe-b7d7-4649-a126-4cd681cb3d38" (UID: "d36d14fe-b7d7-4649-a126-4cd681cb3d38"). InnerVolumeSpecName "kube-api-access-74mfh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 14 17:26:43 test-preload-120543 kubelet[1143]: I0414 17:26:43.468454    1143 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d36d14fe-b7d7-4649-a126-4cd681cb3d38-config-volume\") on node \"test-preload-120543\" DevicePath \"\""
	Apr 14 17:26:43 test-preload-120543 kubelet[1143]: I0414 17:26:43.468503    1143 reconciler.go:384] "Volume detached for volume \"kube-api-access-74mfh\" (UniqueName: \"kubernetes.io/projected/d36d14fe-b7d7-4649-a126-4cd681cb3d38-kube-api-access-74mfh\") on node \"test-preload-120543\" DevicePath \"\""
	Apr 14 17:26:43 test-preload-120543 kubelet[1143]: E0414 17:26:43.871501    1143 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 14 17:26:43 test-preload-120543 kubelet[1143]: E0414 17:26:43.871576    1143 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/1dc14478-f165-4a14-b4db-de28c2797797-config-volume podName:1dc14478-f165-4a14-b4db-de28c2797797 nodeName:}" failed. No retries permitted until 2025-04-14 17:26:44.871562215 +0000 UTC m=+6.824590417 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1dc14478-f165-4a14-b4db-de28c2797797-config-volume") pod "coredns-6d4b75cb6d-wg6pf" (UID: "1dc14478-f165-4a14-b4db-de28c2797797") : object "kube-system"/"coredns" not registered
	Apr 14 17:26:44 test-preload-120543 kubelet[1143]: E0414 17:26:44.879483    1143 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 14 17:26:44 test-preload-120543 kubelet[1143]: E0414 17:26:44.879582    1143 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/1dc14478-f165-4a14-b4db-de28c2797797-config-volume podName:1dc14478-f165-4a14-b4db-de28c2797797 nodeName:}" failed. No retries permitted until 2025-04-14 17:26:46.879567485 +0000 UTC m=+8.832595701 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1dc14478-f165-4a14-b4db-de28c2797797-config-volume") pod "coredns-6d4b75cb6d-wg6pf" (UID: "1dc14478-f165-4a14-b4db-de28c2797797") : object "kube-system"/"coredns" not registered
	Apr 14 17:26:45 test-preload-120543 kubelet[1143]: E0414 17:26:45.277497    1143 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-wg6pf" podUID=1dc14478-f165-4a14-b4db-de28c2797797
	Apr 14 17:26:46 test-preload-120543 kubelet[1143]: I0414 17:26:46.284160    1143 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=d36d14fe-b7d7-4649-a126-4cd681cb3d38 path="/var/lib/kubelet/pods/d36d14fe-b7d7-4649-a126-4cd681cb3d38/volumes"
	Apr 14 17:26:46 test-preload-120543 kubelet[1143]: E0414 17:26:46.893151    1143 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 14 17:26:46 test-preload-120543 kubelet[1143]: E0414 17:26:46.893400    1143 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/1dc14478-f165-4a14-b4db-de28c2797797-config-volume podName:1dc14478-f165-4a14-b4db-de28c2797797 nodeName:}" failed. No retries permitted until 2025-04-14 17:26:50.893345145 +0000 UTC m=+12.846373346 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1dc14478-f165-4a14-b4db-de28c2797797-config-volume") pod "coredns-6d4b75cb6d-wg6pf" (UID: "1dc14478-f165-4a14-b4db-de28c2797797") : object "kube-system"/"coredns" not registered
	Apr 14 17:26:47 test-preload-120543 kubelet[1143]: E0414 17:26:47.277798    1143 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-wg6pf" podUID=1dc14478-f165-4a14-b4db-de28c2797797
	
	
	==> storage-provisioner [16e5845ec0fd064eae416594f28387f2888cb7c482958ef6463e05152f0bd9cf] <==
	I0414 17:26:43.966431       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
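The kubelet entries in the log above show the volume reconciler's capped-doubling retry: each failed MountVolume.SetUp for the CoreDNS config-volume schedules the next attempt after 500ms, then 1s, 2s, 4s, until the "coredns" ConfigMap is registered with the restarted kubelet. A minimal Go sketch of that backoff shape, under stated assumptions (retryWithBackoff and the failing op are hypothetical, not kubelet's nestedpendingoperations code):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryWithBackoff keeps retrying op, doubling the wait between
	// attempts up to max, mirroring durationBeforeRetry in the log.
	func retryWithBackoff(op func() error, initial, max time.Duration) {
		delay := initial
		for op() != nil { // the kubelet likewise retries indefinitely
			fmt.Printf("no retries permitted for %v\n", delay)
			time.Sleep(delay)
			if delay < max {
				delay *= 2 // 500ms -> 1s -> 2s -> 4s ...
			}
		}
	}

	func main() {
		attempts := 0
		retryWithBackoff(func() error {
			attempts++
			if attempts < 5 { // succeed once the object is "registered"
				return errors.New(`object "kube-system"/"coredns" not registered`)
			}
			return nil
		}, 500*time.Millisecond, 2*time.Minute)
	}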
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-120543 -n test-preload-120543
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-120543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-120543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-120543
--- FAIL: TestPreload (162.42s)

                                                
                                    
TestKubernetesUpgrade (423.47s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-771697 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-771697 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m13.585763187s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-771697] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20349
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-771697" primary control-plane node in "kubernetes-upgrade-771697" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 17:32:50.111064  194818 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:32:50.111339  194818 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:32:50.111350  194818 out.go:358] Setting ErrFile to fd 2...
	I0414 17:32:50.111354  194818 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:32:50.111571  194818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	I0414 17:32:50.112159  194818 out.go:352] Setting JSON to false
	I0414 17:32:50.113060  194818 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8068,"bootTime":1744643902,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 17:32:50.113110  194818 start.go:139] virtualization: kvm guest
	I0414 17:32:50.115234  194818 out.go:177] * [kubernetes-upgrade-771697] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 17:32:50.116318  194818 notify.go:220] Checking for updates...
	I0414 17:32:50.116371  194818 out.go:177]   - MINIKUBE_LOCATION=20349
	I0414 17:32:50.117609  194818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 17:32:50.118793  194818 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:32:50.120025  194818 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 17:32:50.121275  194818 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 17:32:50.122388  194818 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 17:32:50.123659  194818 config.go:182] Loaded profile config "NoKubernetes-900958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0414 17:32:50.123757  194818 config.go:182] Loaded profile config "cert-expiration-560919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:32:50.123909  194818 config.go:182] Loaded profile config "pause-439119": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:32:50.123994  194818 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 17:32:50.156458  194818 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 17:32:50.157471  194818 start.go:297] selected driver: kvm2
	I0414 17:32:50.157485  194818 start.go:901] validating driver "kvm2" against <nil>
	I0414 17:32:50.157495  194818 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 17:32:50.158205  194818 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:32:50.158277  194818 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20349-149500/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 17:32:50.172948  194818 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 17:32:50.173010  194818 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 17:32:50.173265  194818 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0414 17:32:50.173298  194818 cni.go:84] Creating CNI manager for ""
	I0414 17:32:50.173351  194818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:32:50.173362  194818 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 17:32:50.173429  194818 start.go:340] cluster config:
	{Name:kubernetes-upgrade-771697 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-771697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:32:50.173533  194818 iso.go:125] acquiring lock: {Name:mk56ab209abfa01de10f2f82564ecd03de00499a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:32:50.175001  194818 out.go:177] * Starting "kubernetes-upgrade-771697" primary control-plane node in "kubernetes-upgrade-771697" cluster
	I0414 17:32:50.176138  194818 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 17:32:50.176181  194818 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 17:32:50.176193  194818 cache.go:56] Caching tarball of preloaded images
	I0414 17:32:50.176281  194818 preload.go:172] Found /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 17:32:50.176297  194818 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0414 17:32:50.176441  194818 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/config.json ...
	I0414 17:32:50.176471  194818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/config.json: {Name:mk3b13ac82723d4c39317e00fc621a8353e17643 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:32:50.176615  194818 start.go:360] acquireMachinesLock for kubernetes-upgrade-771697: {Name:mk6f64d523f60ec1e047c10a4c586315976dcd43 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 17:33:27.906958  194818 start.go:364] duration metric: took 37.730317476s to acquireMachinesLock for "kubernetes-upgrade-771697"
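The two lines above bracket a 37.7s wait on the process-wide machines lock (configured with Delay:500ms and Timeout:13m0s) while concurrent profiles held it. A hedged sketch of a timeout-guarded lock of that shape; this is a capacity-1 channel stand-in, not minikube's actual lock package:

	package main

	import (
		"fmt"
		"time"
	)

	var machines = make(chan struct{}, 1) // stand-in for the named machines lock

	// acquireMachinesLock polls for the lock every delay and gives up
	// after timeout, matching the Delay/Timeout fields logged above.
	func acquireMachinesLock(name string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			select {
			case machines <- struct{}{}: // acquired
				return nil
			default:
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out acquiring machines lock for %q", name)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		if err := acquireMachinesLock("kubernetes-upgrade-771697", 500*time.Millisecond, 13*time.Minute); err != nil {
			panic(err)
		}
		defer func() { <-machines }() // release
		fmt.Println("provisioning new machine...")
	}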
	I0414 17:33:27.907030  194818 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-771697 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-771697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 17:33:27.907166  194818 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 17:33:27.908767  194818 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 17:33:27.908965  194818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:33:27.909021  194818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:33:27.927304  194818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37041
	I0414 17:33:27.927679  194818 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:33:27.928173  194818 main.go:141] libmachine: Using API Version  1
	I0414 17:33:27.928193  194818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:33:27.928604  194818 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:33:27.928808  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetMachineName
	I0414 17:33:27.928953  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .DriverName
	I0414 17:33:27.929125  194818 start.go:159] libmachine.API.Create for "kubernetes-upgrade-771697" (driver="kvm2")
	I0414 17:33:27.929160  194818 client.go:168] LocalClient.Create starting
	I0414 17:33:27.929204  194818 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem
	I0414 17:33:27.929246  194818 main.go:141] libmachine: Decoding PEM data...
	I0414 17:33:27.929267  194818 main.go:141] libmachine: Parsing certificate...
	I0414 17:33:27.929352  194818 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem
	I0414 17:33:27.929383  194818 main.go:141] libmachine: Decoding PEM data...
	I0414 17:33:27.929399  194818 main.go:141] libmachine: Parsing certificate...
	I0414 17:33:27.929417  194818 main.go:141] libmachine: Running pre-create checks...
	I0414 17:33:27.929430  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .PreCreateCheck
	I0414 17:33:27.929849  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetConfigRaw
	I0414 17:33:27.930311  194818 main.go:141] libmachine: Creating machine...
	I0414 17:33:27.930328  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .Create
	I0414 17:33:27.930898  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) creating KVM machine...
	I0414 17:33:27.930927  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) creating network...
	I0414 17:33:27.931754  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | found existing default KVM network
	I0414 17:33:27.932953  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:27.932794  195089 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b3:fb:8a} reservation:<nil>}
	I0414 17:33:27.933861  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:27.933784  195089 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:6c:4a:54} reservation:<nil>}
	I0414 17:33:27.935015  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:27.934925  195089 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013c90}
	I0414 17:33:27.935033  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | created network xml: 
	I0414 17:33:27.935045  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | <network>
	I0414 17:33:27.935078  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG |   <name>mk-kubernetes-upgrade-771697</name>
	I0414 17:33:27.935097  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG |   <dns enable='no'/>
	I0414 17:33:27.935110  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG |   
	I0414 17:33:27.935122  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0414 17:33:27.935140  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG |     <dhcp>
	I0414 17:33:27.935157  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0414 17:33:27.935169  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG |     </dhcp>
	I0414 17:33:27.935179  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG |   </ip>
	I0414 17:33:27.935189  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG |   
	I0414 17:33:27.935199  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | </network>
	I0414 17:33:27.935210  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | 
	I0414 17:33:27.940775  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | trying to create private KVM network mk-kubernetes-upgrade-771697 192.168.61.0/24...
	I0414 17:33:28.021906  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | private KVM network mk-kubernetes-upgrade-771697 192.168.61.0/24 created
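The DBG lines above show the driver skipping subnets already claimed by other libvirt networks (192.168.39.0/24, 192.168.50.0/24), settling on 192.168.61.0/24, emitting network XML, and creating the private network. A hedged sketch of the define-and-start step, assuming the libvirt.org/go/libvirt bindings rather than minikube's kvm2 driver code:

	package main

	import (
		libvirt "libvirt.org/go/libvirt"
	)

	// Same shape as the network XML printed in the log above.
	const networkXML = `<network>
	  <name>mk-kubernetes-upgrade-771697</name>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system") // the KVMQemuURI above
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		net, err := conn.NetworkDefineXML(networkXML) // persist the definition
		if err != nil {
			panic(err)
		}
		defer net.Free()

		if err := net.Create(); err != nil { // start it, like `virsh net-start`
			panic(err)
		}
	}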
	I0414 17:33:28.021945  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:28.021864  195089 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 17:33:28.021960  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) setting up store path in /home/jenkins/minikube-integration/20349-149500/.minikube/machines/kubernetes-upgrade-771697 ...
	I0414 17:33:28.021978  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) building disk image from file:///home/jenkins/minikube-integration/20349-149500/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 17:33:28.022061  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Downloading /home/jenkins/minikube-integration/20349-149500/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20349-149500/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 17:33:28.311969  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:28.311807  195089 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/kubernetes-upgrade-771697/id_rsa...
	I0414 17:33:28.406932  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:28.406784  195089 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/kubernetes-upgrade-771697/kubernetes-upgrade-771697.rawdisk...
	I0414 17:33:28.406967  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | Writing magic tar header
	I0414 17:33:28.406985  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | Writing SSH key tar header
	I0414 17:33:28.406998  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:28.406950  195089 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20349-149500/.minikube/machines/kubernetes-upgrade-771697 ...
	I0414 17:33:28.407147  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) setting executable bit set on /home/jenkins/minikube-integration/20349-149500/.minikube/machines/kubernetes-upgrade-771697 (perms=drwx------)
	I0414 17:33:28.407174  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) setting executable bit set on /home/jenkins/minikube-integration/20349-149500/.minikube/machines (perms=drwxr-xr-x)
	I0414 17:33:28.407196  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/kubernetes-upgrade-771697
	I0414 17:33:28.407210  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) setting executable bit set on /home/jenkins/minikube-integration/20349-149500/.minikube (perms=drwxr-xr-x)
	I0414 17:33:28.407225  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) setting executable bit set on /home/jenkins/minikube-integration/20349-149500 (perms=drwxrwxr-x)
	I0414 17:33:28.407234  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 17:33:28.407246  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 17:33:28.407253  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) creating domain...
	I0414 17:33:28.407263  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20349-149500/.minikube/machines
	I0414 17:33:28.407278  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 17:33:28.407291  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20349-149500
	I0414 17:33:28.407305  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 17:33:28.407313  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | checking permissions on dir: /home/jenkins
	I0414 17:33:28.407321  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | checking permissions on dir: /home
	I0414 17:33:28.407332  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | skipping /home - not owner
	I0414 17:33:28.408611  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) define libvirt domain using xml: 
	I0414 17:33:28.408638  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) <domain type='kvm'>
	I0414 17:33:28.408650  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)   <name>kubernetes-upgrade-771697</name>
	I0414 17:33:28.408659  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)   <memory unit='MiB'>2200</memory>
	I0414 17:33:28.408667  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)   <vcpu>2</vcpu>
	I0414 17:33:28.408674  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)   <features>
	I0414 17:33:28.408683  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     <acpi/>
	I0414 17:33:28.408689  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     <apic/>
	I0414 17:33:28.408696  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     <pae/>
	I0414 17:33:28.408701  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     
	I0414 17:33:28.408722  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)   </features>
	I0414 17:33:28.408729  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)   <cpu mode='host-passthrough'>
	I0414 17:33:28.408749  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)   
	I0414 17:33:28.408755  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)   </cpu>
	I0414 17:33:28.408762  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)   <os>
	I0414 17:33:28.408768  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     <type>hvm</type>
	I0414 17:33:28.408776  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     <boot dev='cdrom'/>
	I0414 17:33:28.408782  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     <boot dev='hd'/>
	I0414 17:33:28.408789  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     <bootmenu enable='no'/>
	I0414 17:33:28.408796  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)   </os>
	I0414 17:33:28.408804  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)   <devices>
	I0414 17:33:28.408811  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     <disk type='file' device='cdrom'>
	I0414 17:33:28.408825  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)       <source file='/home/jenkins/minikube-integration/20349-149500/.minikube/machines/kubernetes-upgrade-771697/boot2docker.iso'/>
	I0414 17:33:28.408833  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)       <target dev='hdc' bus='scsi'/>
	I0414 17:33:28.408841  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)       <readonly/>
	I0414 17:33:28.408847  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     </disk>
	I0414 17:33:28.408856  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     <disk type='file' device='disk'>
	I0414 17:33:28.408865  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 17:33:28.408878  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)       <source file='/home/jenkins/minikube-integration/20349-149500/.minikube/machines/kubernetes-upgrade-771697/kubernetes-upgrade-771697.rawdisk'/>
	I0414 17:33:28.408886  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)       <target dev='hda' bus='virtio'/>
	I0414 17:33:28.408893  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     </disk>
	I0414 17:33:28.408900  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     <interface type='network'>
	I0414 17:33:28.408909  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)       <source network='mk-kubernetes-upgrade-771697'/>
	I0414 17:33:28.408916  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)       <model type='virtio'/>
	I0414 17:33:28.408924  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     </interface>
	I0414 17:33:28.408931  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     <interface type='network'>
	I0414 17:33:28.408939  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)       <source network='default'/>
	I0414 17:33:28.408948  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)       <model type='virtio'/>
	I0414 17:33:28.408956  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     </interface>
	I0414 17:33:28.408962  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     <serial type='pty'>
	I0414 17:33:28.408970  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)       <target port='0'/>
	I0414 17:33:28.408976  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     </serial>
	I0414 17:33:28.408984  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     <console type='pty'>
	I0414 17:33:28.408991  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)       <target type='serial' port='0'/>
	I0414 17:33:28.408999  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     </console>
	I0414 17:33:28.409014  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     <rng model='virtio'>
	I0414 17:33:28.409023  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)       <backend model='random'>/dev/random</backend>
	I0414 17:33:28.409030  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     </rng>
	I0414 17:33:28.409037  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     
	I0414 17:33:28.409042  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)     
	I0414 17:33:28.409050  194818 main.go:141] libmachine: (kubernetes-upgrade-771697)   </devices>
	I0414 17:33:28.409055  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) </domain>
	I0414 17:33:28.409066  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) 
	I0414 17:33:28.416351  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:24:a0:40 in network default
	I0414 17:33:28.417165  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) starting domain...
	I0414 17:33:28.417193  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:28.417202  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) ensuring networks are active...
	I0414 17:33:28.418137  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Ensuring network default is active
	I0414 17:33:28.418401  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Ensuring network mk-kubernetes-upgrade-771697 is active
	I0414 17:33:28.419011  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) getting domain XML...
	I0414 17:33:28.419971  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) creating domain...
	I0414 17:33:29.867080  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) waiting for IP...
	I0414 17:33:29.867951  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:29.868523  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | unable to find current IP address of domain kubernetes-upgrade-771697 in network mk-kubernetes-upgrade-771697
	I0414 17:33:29.868554  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:29.868518  195089 retry.go:31] will retry after 195.374274ms: waiting for domain to come up
	I0414 17:33:30.066174  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:30.087225  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | unable to find current IP address of domain kubernetes-upgrade-771697 in network mk-kubernetes-upgrade-771697
	I0414 17:33:30.087264  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:30.087175  195089 retry.go:31] will retry after 304.03338ms: waiting for domain to come up
	I0414 17:33:30.602091  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:30.602677  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | unable to find current IP address of domain kubernetes-upgrade-771697 in network mk-kubernetes-upgrade-771697
	I0414 17:33:30.602697  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:30.602652  195089 retry.go:31] will retry after 323.240856ms: waiting for domain to come up
	I0414 17:33:30.928016  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:30.928576  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | unable to find current IP address of domain kubernetes-upgrade-771697 in network mk-kubernetes-upgrade-771697
	I0414 17:33:30.928609  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:30.928534  195089 retry.go:31] will retry after 505.553218ms: waiting for domain to come up
	I0414 17:33:31.435244  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:31.435710  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | unable to find current IP address of domain kubernetes-upgrade-771697 in network mk-kubernetes-upgrade-771697
	I0414 17:33:31.435790  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:31.435691  195089 retry.go:31] will retry after 660.606442ms: waiting for domain to come up
	I0414 17:33:32.097472  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:32.098370  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | unable to find current IP address of domain kubernetes-upgrade-771697 in network mk-kubernetes-upgrade-771697
	I0414 17:33:32.098445  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:32.098337  195089 retry.go:31] will retry after 887.635013ms: waiting for domain to come up
	I0414 17:33:32.987895  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:32.988392  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | unable to find current IP address of domain kubernetes-upgrade-771697 in network mk-kubernetes-upgrade-771697
	I0414 17:33:32.988450  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:32.988359  195089 retry.go:31] will retry after 839.571977ms: waiting for domain to come up
	I0414 17:33:33.829279  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:33.829722  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | unable to find current IP address of domain kubernetes-upgrade-771697 in network mk-kubernetes-upgrade-771697
	I0414 17:33:33.829782  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:33.829682  195089 retry.go:31] will retry after 1.271109881s: waiting for domain to come up
	I0414 17:33:35.102932  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:35.103366  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | unable to find current IP address of domain kubernetes-upgrade-771697 in network mk-kubernetes-upgrade-771697
	I0414 17:33:35.103401  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:35.103319  195089 retry.go:31] will retry after 1.33702759s: waiting for domain to come up
	I0414 17:33:36.441622  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:36.442101  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | unable to find current IP address of domain kubernetes-upgrade-771697 in network mk-kubernetes-upgrade-771697
	I0414 17:33:36.442125  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:36.442074  195089 retry.go:31] will retry after 1.502057039s: waiting for domain to come up
	I0414 17:33:37.945355  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:37.945849  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | unable to find current IP address of domain kubernetes-upgrade-771697 in network mk-kubernetes-upgrade-771697
	I0414 17:33:37.945881  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:37.945796  195089 retry.go:31] will retry after 2.492085825s: waiting for domain to come up
	I0414 17:33:40.439291  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:40.439695  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | unable to find current IP address of domain kubernetes-upgrade-771697 in network mk-kubernetes-upgrade-771697
	I0414 17:33:40.439723  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:40.439654  195089 retry.go:31] will retry after 2.785175022s: waiting for domain to come up
	I0414 17:33:43.226829  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:43.227339  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | unable to find current IP address of domain kubernetes-upgrade-771697 in network mk-kubernetes-upgrade-771697
	I0414 17:33:43.227365  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:43.227307  195089 retry.go:31] will retry after 4.521494644s: waiting for domain to come up
	I0414 17:33:47.753132  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:47.753687  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | unable to find current IP address of domain kubernetes-upgrade-771697 in network mk-kubernetes-upgrade-771697
	I0414 17:33:47.753710  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | I0414 17:33:47.753656  195089 retry.go:31] will retry after 5.586800961s: waiting for domain to come up
	I0414 17:33:53.341506  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:53.341961  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has current primary IP address 192.168.61.160 and MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:53.341994  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) found domain IP: 192.168.61.160
	I0414 17:33:53.342007  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) reserving static IP address...
	I0414 17:33:53.342335  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-771697", mac: "52:54:00:5d:a4:eb", ip: "192.168.61.160"} in network mk-kubernetes-upgrade-771697
	I0414 17:33:53.415161  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) reserved static IP address 192.168.61.160 for domain kubernetes-upgrade-771697
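The "waiting for IP" loop above polls the network's DHCP leases for the domain's MAC address, sleeping a growing, jittered interval between attempts (195ms at first, 5.58s by the last retry) until a lease appears. A minimal sketch of that poll-with-growing-jitter loop; lookupLeaseIP is a hypothetical stand-in for the real lease query:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// lookupLeaseIP is hypothetical: the real driver inspects the libvirt
	// network's DHCP leases for the MAC (52:54:00:5d:a4:eb above).
	func lookupLeaseIP(mac string) (string, bool) {
		return "", false
	}

	// waitForIP retries with a jittered, growing interval until a lease
	// shows up or the deadline passes.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		base := 200 * time.Millisecond
		start := time.Now()
		for time.Since(start) < timeout {
			if ip, ok := lookupLeaseIP(mac); ok {
				return ip, nil
			}
			wait := base + time.Duration(rand.Int63n(int64(base))) // jitter
			fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
			time.Sleep(wait)
			base = base * 3 / 2 // grow the interval between attempts
		}
		return "", fmt.Errorf("timed out waiting for IP of %s", mac)
	}

	func main() {
		if _, err := waitForIP("52:54:00:5d:a4:eb", 5*time.Second); err != nil {
			fmt.Println(err)
		}
	}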
	I0414 17:33:53.415192  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) waiting for SSH...
	I0414 17:33:53.415202  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | Getting to WaitForSSH function...
	I0414 17:33:53.417552  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:53.417930  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:5d:a4:eb", ip: ""} in network mk-kubernetes-upgrade-771697
	I0414 17:33:53.417969  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-771697 interface with MAC address 52:54:00:5d:a4:eb
	I0414 17:33:53.418156  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | Using SSH client type: external
	I0414 17:33:53.418177  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | Using SSH private key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/kubernetes-upgrade-771697/id_rsa (-rw-------)
	I0414 17:33:53.418251  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20349-149500/.minikube/machines/kubernetes-upgrade-771697/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 17:33:53.418280  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | About to run SSH command:
	I0414 17:33:53.418294  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | exit 0
	I0414 17:33:53.422155  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | SSH cmd err, output: exit status 255: 
	I0414 17:33:53.422173  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0414 17:33:53.422180  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | command : exit 0
	I0414 17:33:53.422187  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | err     : exit status 255
	I0414 17:33:53.422194  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | output  : 
	I0414 17:33:56.422512  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | Getting to WaitForSSH function...
	I0414 17:33:56.424785  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:56.425105  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:a4:eb", ip: ""} in network mk-kubernetes-upgrade-771697: {Iface:virbr3 ExpiryTime:2025-04-14 18:33:43 +0000 UTC Type:0 Mac:52:54:00:5d:a4:eb Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:kubernetes-upgrade-771697 Clientid:01:52:54:00:5d:a4:eb}
	I0414 17:33:56.425137  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined IP address 192.168.61.160 and MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:56.425250  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | Using SSH client type: external
	I0414 17:33:56.425264  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | Using SSH private key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/kubernetes-upgrade-771697/id_rsa (-rw-------)
	I0414 17:33:56.425280  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.160 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20349-149500/.minikube/machines/kubernetes-upgrade-771697/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 17:33:56.425286  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | About to run SSH command:
	I0414 17:33:56.425296  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | exit 0
	I0414 17:33:56.545811  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | SSH cmd err, output: <nil>: 
	I0414 17:33:56.546134  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) KVM machine creation complete
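WaitForSSH above shells out to ssh with non-interactive options and runs `exit 0`; the first probe fails with exit status 255 (sshd not up yet) and the retry three seconds later succeeds. A hedged sketch of that probe using os/exec and a subset of the logged flags (the key path and address are taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady runs `ssh ... exit 0`; a zero exit means sshd answered.
	func sshReady(ip, key string) bool {
		cmd := exec.Command("ssh",
			"-F", "/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-i", key,
			"-p", "22",
			"docker@"+ip,
			"exit 0",
		)
		return cmd.Run() == nil
	}

	func main() {
		key := "/home/jenkins/minikube-integration/20349-149500/.minikube/machines/kubernetes-upgrade-771697/id_rsa"
		for !sshReady("192.168.61.160", key) {
			fmt.Println("SSH not ready (e.g. exit status 255); retrying...")
			time.Sleep(3 * time.Second) // the log retries after ~3s
		}
		fmt.Println("SSH available")
	}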
	I0414 17:33:56.546427  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetConfigRaw
	I0414 17:33:56.547175  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .DriverName
	I0414 17:33:56.547632  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .DriverName
	I0414 17:33:56.547781  194818 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 17:33:56.547796  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetState
	I0414 17:33:56.549134  194818 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 17:33:56.549150  194818 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 17:33:56.549158  194818 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 17:33:56.549176  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHHostname
	I0414 17:33:56.551567  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:56.551955  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:a4:eb", ip: ""} in network mk-kubernetes-upgrade-771697: {Iface:virbr3 ExpiryTime:2025-04-14 18:33:43 +0000 UTC Type:0 Mac:52:54:00:5d:a4:eb Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:kubernetes-upgrade-771697 Clientid:01:52:54:00:5d:a4:eb}
	I0414 17:33:56.551981  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined IP address 192.168.61.160 and MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:56.552122  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHPort
	I0414 17:33:56.552300  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHKeyPath
	I0414 17:33:56.552433  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHKeyPath
	I0414 17:33:56.552574  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHUsername
	I0414 17:33:56.552734  194818 main.go:141] libmachine: Using SSH client type: native
	I0414 17:33:56.552957  194818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.160 22 <nil> <nil>}
	I0414 17:33:56.552967  194818 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 17:33:56.648741  194818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 17:33:56.648760  194818 main.go:141] libmachine: Detecting the provisioner...
	I0414 17:33:56.648767  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHHostname
	I0414 17:33:56.651247  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:56.651576  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:a4:eb", ip: ""} in network mk-kubernetes-upgrade-771697: {Iface:virbr3 ExpiryTime:2025-04-14 18:33:43 +0000 UTC Type:0 Mac:52:54:00:5d:a4:eb Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:kubernetes-upgrade-771697 Clientid:01:52:54:00:5d:a4:eb}
	I0414 17:33:56.651613  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined IP address 192.168.61.160 and MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:56.651785  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHPort
	I0414 17:33:56.651974  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHKeyPath
	I0414 17:33:56.652128  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHKeyPath
	I0414 17:33:56.652261  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHUsername
	I0414 17:33:56.652418  194818 main.go:141] libmachine: Using SSH client type: native
	I0414 17:33:56.652661  194818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.160 22 <nil> <nil>}
	I0414 17:33:56.652674  194818 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 17:33:56.750234  194818 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 17:33:56.750291  194818 main.go:141] libmachine: found compatible host: buildroot
	I0414 17:33:56.750298  194818 main.go:141] libmachine: Provisioning with buildroot...
	I0414 17:33:56.750305  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetMachineName
	I0414 17:33:56.750527  194818 buildroot.go:166] provisioning hostname "kubernetes-upgrade-771697"
	I0414 17:33:56.750555  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetMachineName
	I0414 17:33:56.750693  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHHostname
	I0414 17:33:56.753507  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:56.753896  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:a4:eb", ip: ""} in network mk-kubernetes-upgrade-771697: {Iface:virbr3 ExpiryTime:2025-04-14 18:33:43 +0000 UTC Type:0 Mac:52:54:00:5d:a4:eb Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:kubernetes-upgrade-771697 Clientid:01:52:54:00:5d:a4:eb}
	I0414 17:33:56.753932  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined IP address 192.168.61.160 and MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:56.754087  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHPort
	I0414 17:33:56.754261  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHKeyPath
	I0414 17:33:56.754399  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHKeyPath
	I0414 17:33:56.754520  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHUsername
	I0414 17:33:56.754699  194818 main.go:141] libmachine: Using SSH client type: native
	I0414 17:33:56.754945  194818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.160 22 <nil> <nil>}
	I0414 17:33:56.754959  194818 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-771697 && echo "kubernetes-upgrade-771697" | sudo tee /etc/hostname
	I0414 17:33:56.867297  194818 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-771697
	
	I0414 17:33:56.867328  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHHostname
	I0414 17:33:56.870041  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:56.870450  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:a4:eb", ip: ""} in network mk-kubernetes-upgrade-771697: {Iface:virbr3 ExpiryTime:2025-04-14 18:33:43 +0000 UTC Type:0 Mac:52:54:00:5d:a4:eb Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:kubernetes-upgrade-771697 Clientid:01:52:54:00:5d:a4:eb}
	I0414 17:33:56.870481  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined IP address 192.168.61.160 and MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:56.870626  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHPort
	I0414 17:33:56.870788  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHKeyPath
	I0414 17:33:56.870953  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHKeyPath
	I0414 17:33:56.871063  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHUsername
	I0414 17:33:56.871231  194818 main.go:141] libmachine: Using SSH client type: native
	I0414 17:33:56.871429  194818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.160 22 <nil> <nil>}
	I0414 17:33:56.871455  194818 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-771697' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-771697/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-771697' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 17:33:56.980854  194818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
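	[editor's note] The two SSH commands above (setting the transient hostname, then pinning 127.0.1.1 to the hostname in /etc/hosts) follow the same run-a-command-over-SSH pattern that the libmachine/ssh_runner lines record. A minimal sketch of that pattern using golang.org/x/crypto/ssh, assuming key-based auth and the IP from the DHCP lease above; the name runRemote is illustrative, not minikube's API:

	// sshrun.go: minimal sketch of running a provisioning command over SSH,
	// in the spirit of the ssh_runner/libmachine lines above (illustrative only).
	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runRemote dials host:22 as user with the given private key, runs cmd,
	// and returns combined stdout+stderr, like the "SSH cmd err, output" lines.
	func runRemote(host, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
		}
		client, err := ssh.Dial("tcp", host+":22", cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runRemote("192.168.61.160", "docker",
			os.ExpandEnv("$HOME/.minikube/machines/kubernetes-upgrade-771697/id_rsa"),
			`sudo hostname kubernetes-upgrade-771697 && echo "kubernetes-upgrade-771697" | sudo tee /etc/hostname`)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(out)
	}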
	I0414 17:33:56.980881  194818 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20349-149500/.minikube CaCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20349-149500/.minikube}
	I0414 17:33:56.980898  194818 buildroot.go:174] setting up certificates
	I0414 17:33:56.980907  194818 provision.go:84] configureAuth start
	I0414 17:33:56.980915  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetMachineName
	I0414 17:33:56.981229  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetIP
	I0414 17:33:56.984035  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:56.984419  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:a4:eb", ip: ""} in network mk-kubernetes-upgrade-771697: {Iface:virbr3 ExpiryTime:2025-04-14 18:33:43 +0000 UTC Type:0 Mac:52:54:00:5d:a4:eb Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:kubernetes-upgrade-771697 Clientid:01:52:54:00:5d:a4:eb}
	I0414 17:33:56.984448  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined IP address 192.168.61.160 and MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:56.984640  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHHostname
	I0414 17:33:56.987029  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:56.987371  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:a4:eb", ip: ""} in network mk-kubernetes-upgrade-771697: {Iface:virbr3 ExpiryTime:2025-04-14 18:33:43 +0000 UTC Type:0 Mac:52:54:00:5d:a4:eb Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:kubernetes-upgrade-771697 Clientid:01:52:54:00:5d:a4:eb}
	I0414 17:33:56.987410  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined IP address 192.168.61.160 and MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:56.987499  194818 provision.go:143] copyHostCerts
	I0414 17:33:56.987561  194818 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem, removing ...
	I0414 17:33:56.987574  194818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem
	I0414 17:33:56.987627  194818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem (1082 bytes)
	I0414 17:33:56.987726  194818 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem, removing ...
	I0414 17:33:56.987734  194818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem
	I0414 17:33:56.987755  194818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem (1123 bytes)
	I0414 17:33:56.987854  194818 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem, removing ...
	I0414 17:33:56.987865  194818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem
	I0414 17:33:56.987884  194818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem (1675 bytes)
	I0414 17:33:56.987951  194818 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-771697 san=[127.0.0.1 192.168.61.160 kubernetes-upgrade-771697 localhost minikube]
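	[editor's note] provision.go:117 issues a server certificate whose SAN list covers the loopback address, the lease IP, and the machine hostnames, signed by the shared minikube CA. A minimal crypto/x509 sketch of that idea; the CA pair is generated inline here to stay self-contained (the log instead reuses ca.pem/ca-key.pem from the .minikube certs directory), and this is not minikube's actual certificate code:

	// sancert.go: sketch of issuing a server cert with the SAN set seen above.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA so the example runs standalone.
		caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "kubernetes-upgrade-771697"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			// The SAN set from provision.go:117 above.
			DNSNames:    []string{"kubernetes-upgrade-771697", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.160")},
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}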
	I0414 17:33:57.298833  194818 provision.go:177] copyRemoteCerts
	I0414 17:33:57.298921  194818 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 17:33:57.298957  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHHostname
	I0414 17:33:57.301648  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:57.302007  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:a4:eb", ip: ""} in network mk-kubernetes-upgrade-771697: {Iface:virbr3 ExpiryTime:2025-04-14 18:33:43 +0000 UTC Type:0 Mac:52:54:00:5d:a4:eb Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:kubernetes-upgrade-771697 Clientid:01:52:54:00:5d:a4:eb}
	I0414 17:33:57.302037  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined IP address 192.168.61.160 and MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:57.302194  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHPort
	I0414 17:33:57.302385  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHKeyPath
	I0414 17:33:57.302569  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHUsername
	I0414 17:33:57.302707  194818 sshutil.go:53] new ssh client: &{IP:192.168.61.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/kubernetes-upgrade-771697/id_rsa Username:docker}
	I0414 17:33:57.379585  194818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 17:33:57.403979  194818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0414 17:33:57.426951  194818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 17:33:57.450485  194818 provision.go:87] duration metric: took 469.56444ms to configureAuth
	I0414 17:33:57.450515  194818 buildroot.go:189] setting minikube options for container-runtime
	I0414 17:33:57.450714  194818 config.go:182] Loaded profile config "kubernetes-upgrade-771697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 17:33:57.450794  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHHostname
	I0414 17:33:57.453447  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:57.453805  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:a4:eb", ip: ""} in network mk-kubernetes-upgrade-771697: {Iface:virbr3 ExpiryTime:2025-04-14 18:33:43 +0000 UTC Type:0 Mac:52:54:00:5d:a4:eb Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:kubernetes-upgrade-771697 Clientid:01:52:54:00:5d:a4:eb}
	I0414 17:33:57.453855  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined IP address 192.168.61.160 and MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:57.453999  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHPort
	I0414 17:33:57.454170  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHKeyPath
	I0414 17:33:57.454288  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHKeyPath
	I0414 17:33:57.454378  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHUsername
	I0414 17:33:57.454515  194818 main.go:141] libmachine: Using SSH client type: native
	I0414 17:33:57.454693  194818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.160 22 <nil> <nil>}
	I0414 17:33:57.454709  194818 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 17:33:57.667580  194818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
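	[editor's note] The command above drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O; presumably the guest image's crio.service picks the file up via an EnvironmentFile= directive (an assumption, the unit file is not shown in this log). Composing that remote command is a small string-building step, sketched here:

	// crioopts.go: sketch of composing the sysconfig drop-in command seen above.
	// The EnvironmentFile= wiring inside crio.service is assumed, not shown.
	package main

	import "fmt"

	func crioOptsCmd(insecureCIDR string) string {
		env := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", insecureCIDR)
		return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, env)
	}

	func main() { fmt.Println(crioOptsCmd("10.96.0.0/12")) }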
	I0414 17:33:57.667618  194818 main.go:141] libmachine: Checking connection to Docker...
	I0414 17:33:57.667627  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetURL
	I0414 17:33:57.668948  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | using libvirt version 6000000
	I0414 17:33:57.671197  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:57.671493  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:a4:eb", ip: ""} in network mk-kubernetes-upgrade-771697: {Iface:virbr3 ExpiryTime:2025-04-14 18:33:43 +0000 UTC Type:0 Mac:52:54:00:5d:a4:eb Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:kubernetes-upgrade-771697 Clientid:01:52:54:00:5d:a4:eb}
	I0414 17:33:57.671516  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined IP address 192.168.61.160 and MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:57.671737  194818 main.go:141] libmachine: Docker is up and running!
	I0414 17:33:57.671755  194818 main.go:141] libmachine: Reticulating splines...
	I0414 17:33:57.671765  194818 client.go:171] duration metric: took 29.742592753s to LocalClient.Create
	I0414 17:33:57.671798  194818 start.go:167] duration metric: took 29.742696678s to libmachine.API.Create "kubernetes-upgrade-771697"
	I0414 17:33:57.671811  194818 start.go:293] postStartSetup for "kubernetes-upgrade-771697" (driver="kvm2")
	I0414 17:33:57.671828  194818 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 17:33:57.671851  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .DriverName
	I0414 17:33:57.672088  194818 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 17:33:57.672115  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHHostname
	I0414 17:33:57.674272  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:57.674565  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:a4:eb", ip: ""} in network mk-kubernetes-upgrade-771697: {Iface:virbr3 ExpiryTime:2025-04-14 18:33:43 +0000 UTC Type:0 Mac:52:54:00:5d:a4:eb Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:kubernetes-upgrade-771697 Clientid:01:52:54:00:5d:a4:eb}
	I0414 17:33:57.674582  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined IP address 192.168.61.160 and MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:57.674717  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHPort
	I0414 17:33:57.674885  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHKeyPath
	I0414 17:33:57.675054  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHUsername
	I0414 17:33:57.675180  194818 sshutil.go:53] new ssh client: &{IP:192.168.61.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/kubernetes-upgrade-771697/id_rsa Username:docker}
	I0414 17:33:57.759687  194818 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 17:33:57.764300  194818 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 17:33:57.764324  194818 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/addons for local assets ...
	I0414 17:33:57.764402  194818 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/files for local assets ...
	I0414 17:33:57.764495  194818 filesync.go:149] local asset: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem -> 1566332.pem in /etc/ssl/certs
	I0414 17:33:57.764613  194818 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 17:33:57.774798  194818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:33:57.800914  194818 start.go:296] duration metric: took 129.086592ms for postStartSetup
	I0414 17:33:57.800954  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetConfigRaw
	I0414 17:33:57.801538  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetIP
	I0414 17:33:57.804110  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:57.804500  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:a4:eb", ip: ""} in network mk-kubernetes-upgrade-771697: {Iface:virbr3 ExpiryTime:2025-04-14 18:33:43 +0000 UTC Type:0 Mac:52:54:00:5d:a4:eb Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:kubernetes-upgrade-771697 Clientid:01:52:54:00:5d:a4:eb}
	I0414 17:33:57.804532  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined IP address 192.168.61.160 and MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:57.804724  194818 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/config.json ...
	I0414 17:33:57.804903  194818 start.go:128] duration metric: took 29.897724548s to createHost
	I0414 17:33:57.804925  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHHostname
	I0414 17:33:57.807201  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:57.807588  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:a4:eb", ip: ""} in network mk-kubernetes-upgrade-771697: {Iface:virbr3 ExpiryTime:2025-04-14 18:33:43 +0000 UTC Type:0 Mac:52:54:00:5d:a4:eb Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:kubernetes-upgrade-771697 Clientid:01:52:54:00:5d:a4:eb}
	I0414 17:33:57.807617  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined IP address 192.168.61.160 and MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:57.807718  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHPort
	I0414 17:33:57.807920  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHKeyPath
	I0414 17:33:57.808065  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHKeyPath
	I0414 17:33:57.808264  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHUsername
	I0414 17:33:57.808428  194818 main.go:141] libmachine: Using SSH client type: native
	I0414 17:33:57.808674  194818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.160 22 <nil> <nil>}
	I0414 17:33:57.808692  194818 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 17:33:57.906224  194818 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744652037.888368898
	
	I0414 17:33:57.906245  194818 fix.go:216] guest clock: 1744652037.888368898
	I0414 17:33:57.906252  194818 fix.go:229] Guest: 2025-04-14 17:33:57.888368898 +0000 UTC Remote: 2025-04-14 17:33:57.804913762 +0000 UTC m=+67.729082685 (delta=83.455136ms)
	I0414 17:33:57.906270  194818 fix.go:200] guest clock delta is within tolerance: 83.455136ms
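	[editor's note] fix.go parses the guest's `date +%s.%N` output, compares it against the host clock, and only intervenes when the delta exceeds a tolerance (83ms passes here). A sketch of that comparison; the one-second tolerance below is an assumed illustration value, not the value minikube uses:

	// clockdelta.go: sketch of the guest-clock tolerance check in fix.go.
	package main

	import (
		"fmt"
		"log"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta converts "seconds.nanoseconds" output from the guest into a
	// time.Time and returns its absolute offset from the host clock.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(int64(secs), int64(math.Mod(secs, 1)*1e9))
		d := host.Sub(guest)
		if d < 0 {
			d = -d
		}
		return d, nil
	}

	func main() {
		d, err := clockDelta("1744652037.888368898", time.Now())
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(d, d <= time.Second) // tolerance value assumed for illustration
	}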
	I0414 17:33:57.906275  194818 start.go:83] releasing machines lock for "kubernetes-upgrade-771697", held for 29.999278941s
	I0414 17:33:57.906296  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .DriverName
	I0414 17:33:57.906535  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetIP
	I0414 17:33:57.909499  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:57.909889  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:a4:eb", ip: ""} in network mk-kubernetes-upgrade-771697: {Iface:virbr3 ExpiryTime:2025-04-14 18:33:43 +0000 UTC Type:0 Mac:52:54:00:5d:a4:eb Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:kubernetes-upgrade-771697 Clientid:01:52:54:00:5d:a4:eb}
	I0414 17:33:57.909934  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined IP address 192.168.61.160 and MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:57.910097  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .DriverName
	I0414 17:33:57.910659  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .DriverName
	I0414 17:33:57.910861  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .DriverName
	I0414 17:33:57.910976  194818 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 17:33:57.911021  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHHostname
	I0414 17:33:57.911129  194818 ssh_runner.go:195] Run: cat /version.json
	I0414 17:33:57.911157  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHHostname
	I0414 17:33:57.916165  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:57.916469  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:57.916496  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:a4:eb", ip: ""} in network mk-kubernetes-upgrade-771697: {Iface:virbr3 ExpiryTime:2025-04-14 18:33:43 +0000 UTC Type:0 Mac:52:54:00:5d:a4:eb Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:kubernetes-upgrade-771697 Clientid:01:52:54:00:5d:a4:eb}
	I0414 17:33:57.916532  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined IP address 192.168.61.160 and MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:57.916657  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHPort
	I0414 17:33:57.916817  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHKeyPath
	I0414 17:33:57.916904  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:a4:eb", ip: ""} in network mk-kubernetes-upgrade-771697: {Iface:virbr3 ExpiryTime:2025-04-14 18:33:43 +0000 UTC Type:0 Mac:52:54:00:5d:a4:eb Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:kubernetes-upgrade-771697 Clientid:01:52:54:00:5d:a4:eb}
	I0414 17:33:57.916931  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined IP address 192.168.61.160 and MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:57.916968  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHUsername
	I0414 17:33:57.917091  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHPort
	I0414 17:33:57.917147  194818 sshutil.go:53] new ssh client: &{IP:192.168.61.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/kubernetes-upgrade-771697/id_rsa Username:docker}
	I0414 17:33:57.917265  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHKeyPath
	I0414 17:33:57.917390  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHUsername
	I0414 17:33:57.917500  194818 sshutil.go:53] new ssh client: &{IP:192.168.61.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/kubernetes-upgrade-771697/id_rsa Username:docker}
	I0414 17:33:57.995777  194818 ssh_runner.go:195] Run: systemctl --version
	I0414 17:33:58.021983  194818 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 17:33:58.186673  194818 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 17:33:58.193340  194818 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 17:33:58.193414  194818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 17:33:58.209989  194818 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
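	[editor's note] cni.go:262 sidelines any pre-installed bridge/podman CNI configs by renaming them with a .mk_disabled suffix so they stop matching *.conflist and minikube's own CNI wins. The find/mv one-liner above can be sketched in Go as:

	// cnidisable.go: sketch of the bridge/podman CNI disabling step (cni.go:262).
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func disableBridgeCNI(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var moved []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue // already disabled, or not a config file
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return moved, err
				}
				moved = append(moved, src)
			}
		}
		return moved, nil
	}

	func main() {
		moved, err := disableBridgeCNI("/etc/cni/net.d")
		fmt.Println(moved, err)
	}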
	I0414 17:33:58.210012  194818 start.go:495] detecting cgroup driver to use...
	I0414 17:33:58.210076  194818 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 17:33:58.230686  194818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 17:33:58.250805  194818 docker.go:217] disabling cri-docker service (if available) ...
	I0414 17:33:58.250861  194818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 17:33:58.264764  194818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 17:33:58.279082  194818 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 17:33:58.394305  194818 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 17:33:58.557101  194818 docker.go:233] disabling docker service ...
	I0414 17:33:58.557173  194818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 17:33:58.571540  194818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 17:33:58.583928  194818 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 17:33:58.699911  194818 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 17:33:58.815139  194818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
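	[editor's note] The docker.go:217/233 sequences above keep docker and cri-docker from racing CRI-O for the container runtime: stop the socket and service, disable the socket, then mask the service so nothing can socket-activate it later. A generalized sketch of that stop/disable/mask pattern (the log varies the exact unit/verb pairing slightly):

	// runtimedisable.go: sketch of the systemctl stop/disable/mask sequence above.
	package main

	import (
		"log"
		"os/exec"
	)

	func maskService(unit string) error {
		// stop/disable may fail harmlessly if the unit is absent; the mask must stick.
		_ = exec.Command("sudo", "systemctl", "stop", "-f", unit).Run()
		_ = exec.Command("sudo", "systemctl", "disable", unit).Run()
		return exec.Command("sudo", "systemctl", "mask", unit).Run()
	}

	func main() {
		for _, u := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
			if err := maskService(u); err != nil {
				log.Fatal(err)
			}
		}
	}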
	I0414 17:33:58.829160  194818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 17:33:58.848086  194818 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0414 17:33:58.848159  194818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:33:58.859063  194818 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 17:33:58.859143  194818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:33:58.869959  194818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:33:58.880398  194818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
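	[editor's note] crio.go:59/70 rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, and re-insert conmon_cgroup = "pod" right after it (the pairing CRI-O expects with the cgroupfs manager). The same edits without sed, as a sketch:

	// crioconf.go: sketch of the sed edits above applied with regexp.
	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func rewriteCrioConf(path, pauseImage, cgroupMgr string) error {
		b, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		s := string(b)
		s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(s, `pause_image = "`+pauseImage+`"`)
		// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
		s = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).ReplaceAllString(s, "")
		s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(s, `cgroup_manager = "`+cgroupMgr+`"`+"\nconmon_cgroup = \"pod\"")
		return os.WriteFile(path, []byte(s), 0o644)
	}

	func main() {
		if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
			"registry.k8s.io/pause:3.2", "cgroupfs"); err != nil {
			log.Fatal(err)
		}
	}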
	I0414 17:33:58.891326  194818 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 17:33:58.902231  194818 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 17:33:58.913358  194818 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 17:33:58.913430  194818 ssh_runner.go:195] Run: sudo modprobe br_netfilter
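	[editor's note] The status-255 sysctl failure at crio.go:166 is tolerated because /proc/sys/net/bridge/* only exists once br_netfilter is loaded; the modprobe on the next line materializes the key. The probe-then-modprobe fallback as a sketch:

	// brnetfilter.go: sketch of the tolerated sysctl probe + modprobe fallback above.
	package main

	import (
		"log"
		"os/exec"
	)

	func ensureBridgeNetfilter() error {
		// Probe first; a missing /proc/sys/net/bridge tree means the module is absent.
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
			return nil
		}
		// Loading br_netfilter creates the bridge sysctl tree.
		return exec.Command("sudo", "modprobe", "br_netfilter").Run()
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			log.Fatal(err)
		}
	}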
	I0414 17:33:58.929464  194818 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 17:33:58.940716  194818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:33:59.058002  194818 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 17:33:59.369636  194818 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 17:33:59.369713  194818 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 17:33:59.374623  194818 start.go:563] Will wait 60s for crictl version
	I0414 17:33:59.374678  194818 ssh_runner.go:195] Run: which crictl
	I0414 17:33:59.378670  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 17:33:59.423979  194818 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 17:33:59.424081  194818 ssh_runner.go:195] Run: crio --version
	I0414 17:33:59.452622  194818 ssh_runner.go:195] Run: crio --version
	I0414 17:33:59.608432  194818 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0414 17:33:59.633397  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetIP
	I0414 17:33:59.636334  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:59.636720  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:a4:eb", ip: ""} in network mk-kubernetes-upgrade-771697: {Iface:virbr3 ExpiryTime:2025-04-14 18:33:43 +0000 UTC Type:0 Mac:52:54:00:5d:a4:eb Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:kubernetes-upgrade-771697 Clientid:01:52:54:00:5d:a4:eb}
	I0414 17:33:59.636751  194818 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined IP address 192.168.61.160 and MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:33:59.636977  194818 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0414 17:33:59.641924  194818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 17:33:59.659340  194818 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-771697 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-771697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.160 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 17:33:59.659446  194818 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 17:33:59.659488  194818 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:33:59.694110  194818 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 17:33:59.694231  194818 ssh_runner.go:195] Run: which lz4
	I0414 17:33:59.698680  194818 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 17:33:59.702977  194818 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 17:33:59.703006  194818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0414 17:34:01.372229  194818 crio.go:462] duration metric: took 1.673575392s to copy over tarball
	I0414 17:34:01.372311  194818 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 17:34:03.953986  194818 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.581638524s)
	I0414 17:34:03.954031  194818 crio.go:469] duration metric: took 2.581770005s to extract the tarball
	I0414 17:34:03.954128  194818 ssh_runner.go:146] rm: /preloaded.tar.lz4
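	[editor's note] The preload flow above is: stat /preloaded.tar.lz4 (status 1 means it is not there yet), scp the ~473 MB tarball over, untar it into /var with xattrs preserved (security.capability carries file capabilities for the bundled binaries), then delete the tarball. The extraction step as a sketch:

	// preload.go: sketch of the tarball extraction step (crio.go:462/469).
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func extractPreload(tarball, dest string) error {
		// Mirror the flags from the log: keep security.capability xattrs and
		// decompress through lz4.
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", dest, "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
			log.Fatal(err)
		}
	}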
	I0414 17:34:03.999621  194818 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:34:04.050885  194818 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 17:34:04.050913  194818 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 17:34:04.050993  194818 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:34:04.051018  194818 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0414 17:34:04.051034  194818 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:34:04.051052  194818 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:34:04.050993  194818 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:34:04.051104  194818 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0414 17:34:04.051096  194818 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:34:04.051232  194818 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0414 17:34:04.053008  194818 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0414 17:34:04.053012  194818 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0414 17:34:04.053121  194818 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:34:04.053206  194818 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:34:04.053312  194818 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0414 17:34:04.053440  194818 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:34:04.053468  194818 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:34:04.053531  194818 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
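	[editor's note] The image.go:178 "daemon lookup" errors are expected on a host that does not have these images in a local Docker daemon: the retriever tries the daemon first, then falls back to the on-disk cache and, failing that, the registry. A sketch of that fallback, assuming the go-containerregistry API that these log lines suggest (an assumption, not confirmed by the log):

	// imglookup.go: sketch of daemon-then-remote image resolution, assuming
	// github.com/google/go-containerregistry.
	package main

	import (
		"fmt"
		"log"

		"github.com/google/go-containerregistry/pkg/name"
		v1 "github.com/google/go-containerregistry/pkg/v1"
		"github.com/google/go-containerregistry/pkg/v1/daemon"
		"github.com/google/go-containerregistry/pkg/v1/remote"
	)

	func retrieve(image string) (v1.Image, error) {
		ref, err := name.ParseReference(image)
		if err != nil {
			return nil, err
		}
		if img, err := daemon.Image(ref); err == nil {
			return img, nil // found in the local Docker daemon
		}
		return remote.Image(ref) // fall back to pulling from the registry
	}

	func main() {
		img, err := retrieve("registry.k8s.io/pause:3.2")
		if err != nil {
			log.Fatal(err)
		}
		d, _ := img.Digest()
		fmt.Println(d)
	}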
	I0414 17:34:04.188015  194818 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:34:04.197291  194818 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0414 17:34:04.198881  194818 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:34:04.205064  194818 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:34:04.206219  194818 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:34:04.207378  194818 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0414 17:34:04.229022  194818 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0414 17:34:04.331327  194818 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0414 17:34:04.331388  194818 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:34:04.331449  194818 ssh_runner.go:195] Run: which crictl
	I0414 17:34:04.348364  194818 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0414 17:34:04.348502  194818 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0414 17:34:04.348597  194818 ssh_runner.go:195] Run: which crictl
	I0414 17:34:04.369618  194818 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0414 17:34:04.369663  194818 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:34:04.369712  194818 ssh_runner.go:195] Run: which crictl
	I0414 17:34:04.369777  194818 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0414 17:34:04.369819  194818 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:34:04.369880  194818 ssh_runner.go:195] Run: which crictl
	I0414 17:34:04.391291  194818 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0414 17:34:04.391330  194818 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:34:04.391372  194818 ssh_runner.go:195] Run: which crictl
	I0414 17:34:04.391594  194818 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0414 17:34:04.391636  194818 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0414 17:34:04.391673  194818 ssh_runner.go:195] Run: which crictl
	I0414 17:34:04.395856  194818 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0414 17:34:04.395874  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:34:04.395892  194818 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0414 17:34:04.395916  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 17:34:04.395989  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:34:04.395925  194818 ssh_runner.go:195] Run: which crictl
	I0414 17:34:04.395933  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:34:04.400241  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 17:34:04.400291  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:34:04.414549  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 17:34:04.529992  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:34:04.558133  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:34:04.558133  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 17:34:04.558263  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:34:04.558369  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 17:34:04.584264  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:34:04.600633  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 17:34:04.633694  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:34:04.678732  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 17:34:04.685940  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:34:04.724697  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:34:04.724788  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 17:34:04.779926  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:34:04.795833  194818 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 17:34:04.795865  194818 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0414 17:34:04.845564  194818 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0414 17:34:04.845625  194818 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0414 17:34:04.883312  194818 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0414 17:34:04.883398  194818 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0414 17:34:04.885634  194818 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0414 17:34:04.895930  194818 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0414 17:34:05.036111  194818 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:34:05.175570  194818 cache_images.go:92] duration metric: took 1.124641532s to LoadCachedImages
	W0414 17:34:05.175661  194818 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0414 17:34:05.175673  194818 kubeadm.go:934] updating node { 192.168.61.160 8443 v1.20.0 crio true true} ...
	I0414 17:34:05.175796  194818 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-771697 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-771697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
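	[editor's note] kubeadm.go:946 emits the kubelet systemd drop-in above, with ExecStart assembled from the node config: a version-pinned binary path, the CRI-O endpoint, a hostname override, and the node IP. A sketch of generating it with text/template; the template text is illustrative, not minikube's actual asset:

	// kubeletunit.go: sketch of rendering the kubelet drop-in seen above.
	package main

	import (
		"log"
		"os"
		"text/template"
	)

	const unit = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint={{.Endpoint}} --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.IP}}

[Install]
`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		err := t.Execute(os.Stdout, map[string]string{
			"Runtime":  "crio",
			"Version":  "v1.20.0",
			"Endpoint": "unix:///var/run/crio/crio.sock",
			"Node":     "kubernetes-upgrade-771697",
			"IP":       "192.168.61.160",
		})
		if err != nil {
			log.Fatal(err)
		}
	}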
	I0414 17:34:05.175891  194818 ssh_runner.go:195] Run: crio config
	I0414 17:34:05.228196  194818 cni.go:84] Creating CNI manager for ""
	I0414 17:34:05.228228  194818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:34:05.228241  194818 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 17:34:05.228259  194818 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.160 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-771697 NodeName:kubernetes-upgrade-771697 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0414 17:34:05.228445  194818 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-771697"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.160
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.160"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
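	[editor's note] The generated kubeadm.yaml above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---; consumers decode it document by document. A sketch with gopkg.in/yaml.v3:

	// multidoc.go: sketch of iterating the multi-document kubeadm.yaml above.
	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // ran out of documents
				}
				log.Fatal(err)
			}
			// e.g. "kubeadm.k8s.io/v1beta2 InitConfiguration"
			fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
		}
	}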
	I0414 17:34:05.228521  194818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0414 17:34:05.239859  194818 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 17:34:05.239952  194818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 17:34:05.250346  194818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0414 17:34:05.270474  194818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 17:34:05.291045  194818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0414 17:34:05.309403  194818 ssh_runner.go:195] Run: grep 192.168.61.160	control-plane.minikube.internal$ /etc/hosts
	I0414 17:34:05.313968  194818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.160	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 17:34:05.327810  194818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:34:05.461369  194818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:34:05.479474  194818 certs.go:68] Setting up /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697 for IP: 192.168.61.160
	I0414 17:34:05.479497  194818 certs.go:194] generating shared ca certs ...
	I0414 17:34:05.479532  194818 certs.go:226] acquiring lock for ca certs: {Name:mk65518f71a0fe967168d84423f624d889cf0622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:34:05.479717  194818 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key
	I0414 17:34:05.479781  194818 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key
	I0414 17:34:05.479796  194818 certs.go:256] generating profile certs ...
	I0414 17:34:05.479870  194818 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/client.key
	I0414 17:34:05.479889  194818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/client.crt with IP's: []
	I0414 17:34:05.766303  194818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/client.crt ...
	I0414 17:34:05.766335  194818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/client.crt: {Name:mk1eefc74be568d47f5868a6e48f57a4573ad2b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:34:05.766509  194818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/client.key ...
	I0414 17:34:05.766531  194818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/client.key: {Name:mkf444f6cb7513a939287fea90ceb5057f9ca488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:34:05.766664  194818 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/apiserver.key.08bc9563
	I0414 17:34:05.766688  194818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/apiserver.crt.08bc9563 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.160]
	I0414 17:34:05.973678  194818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/apiserver.crt.08bc9563 ...
	I0414 17:34:05.973714  194818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/apiserver.crt.08bc9563: {Name:mk18dd7bcc04391ae743eeb56ad64422e44bc6a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:34:05.973910  194818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/apiserver.key.08bc9563 ...
	I0414 17:34:05.973932  194818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/apiserver.key.08bc9563: {Name:mk961e287575186a9b2f8a35ca3d9f65cc2e554b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:34:05.974015  194818 certs.go:381] copying /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/apiserver.crt.08bc9563 -> /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/apiserver.crt
	I0414 17:34:05.974118  194818 certs.go:385] copying /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/apiserver.key.08bc9563 -> /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/apiserver.key
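	[editor's note] The apiserver cert is first written with a .08bc9563 suffix and only then copied to apiserver.crt; the suffix looks like a short hash keyed on the SAN set, so a changed SAN list yields a fresh cert instead of reusing a stale one. That reading is an inference from the naming, not confirmed by the log; a hypothetical sketch of such a key:

	// certkey.go: hypothetical sketch of deriving a SAN-keyed cert suffix.
	// The real derivation behind ".08bc9563" is not shown in the log.
	package main

	import (
		"crypto/sha1"
		"fmt"
		"sort"
		"strings"
	)

	func sanKey(sans []string) string {
		s := append([]string(nil), sans...)
		sort.Strings(s) // order-insensitive: same SAN set, same key
		sum := sha1.Sum([]byte(strings.Join(s, ",")))
		return fmt.Sprintf("%x", sum)[:8]
	}

	func main() {
		fmt.Println(sanKey([]string{"10.96.0.1", "127.0.0.1", "10.0.0.1", "192.168.61.160"}))
	}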
	I0414 17:34:05.974181  194818 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/proxy-client.key
	I0414 17:34:05.974197  194818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/proxy-client.crt with IP's: []
	I0414 17:34:06.021283  194818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/proxy-client.crt ...
	I0414 17:34:06.021313  194818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/proxy-client.crt: {Name:mkf52be9b838430f910380b11b3fad1c6a29c109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:34:06.021464  194818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/proxy-client.key ...
	I0414 17:34:06.021476  194818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/proxy-client.key: {Name:mkb0efb12dc43817fccb9c97549734f57c639c46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:34:06.021636  194818 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem (1338 bytes)
	W0414 17:34:06.021672  194818 certs.go:480] ignoring /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633_empty.pem, impossibly tiny 0 bytes
	I0414 17:34:06.021679  194818 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 17:34:06.021700  194818 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem (1082 bytes)
	I0414 17:34:06.021722  194818 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem (1123 bytes)
	I0414 17:34:06.021743  194818 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem (1675 bytes)
	I0414 17:34:06.021780  194818 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:34:06.022420  194818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 17:34:06.048228  194818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 17:34:06.073819  194818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 17:34:06.099792  194818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 17:34:06.124950  194818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 17:34:06.155231  194818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 17:34:06.183873  194818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 17:34:06.210655  194818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 17:34:06.236905  194818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 17:34:06.261413  194818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem --> /usr/share/ca-certificates/156633.pem (1338 bytes)
	I0414 17:34:06.287936  194818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /usr/share/ca-certificates/1566332.pem (1708 bytes)
	I0414 17:34:06.314733  194818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 17:34:06.338286  194818 ssh_runner.go:195] Run: openssl version
	I0414 17:34:06.346692  194818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 17:34:06.360756  194818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:34:06.369119  194818 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 16:31 /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:34:06.369174  194818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:34:06.375199  194818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 17:34:06.388549  194818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156633.pem && ln -fs /usr/share/ca-certificates/156633.pem /etc/ssl/certs/156633.pem"
	I0414 17:34:06.408578  194818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156633.pem
	I0414 17:34:06.415929  194818 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 16:39 /usr/share/ca-certificates/156633.pem
	I0414 17:34:06.416001  194818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156633.pem
	I0414 17:34:06.424630  194818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/156633.pem /etc/ssl/certs/51391683.0"
	I0414 17:34:06.446063  194818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1566332.pem && ln -fs /usr/share/ca-certificates/1566332.pem /etc/ssl/certs/1566332.pem"
	I0414 17:34:06.459840  194818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1566332.pem
	I0414 17:34:06.467332  194818 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 16:39 /usr/share/ca-certificates/1566332.pem
	I0414 17:34:06.467396  194818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1566332.pem
	I0414 17:34:06.473385  194818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1566332.pem /etc/ssl/certs/3ec20f2e.0"
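The three openssl/ln passes above implement the standard OpenSSL subject-hash convention: each CA bundle under /usr/share/ca-certificates is hashed with 'openssl x509 -hash -noout' and symlinked into /etc/ssl/certs as <hash>.0 (minikubeCA.pem hashes to b5213941 here). A minimal local sketch of that convention in Go; the helper name is illustrative, and minikube actually drives these commands over SSH:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash mirrors the hash-and-link step above: compute the
	// certificate's OpenSSL subject hash and symlink it into certsDir as
	// <hash>.0 so TLS clients using the system store can find it.
	func linkBySubjectHash(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // ln -fs semantics: replace any stale link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}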
	I0414 17:34:06.484526  194818 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 17:34:06.489156  194818 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
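The stat probe above is the first-start heuristic: exit status 1 (file not found) on the kubelet client cert is read as "no previous cluster state on this node". Locally the same check reduces to the following sketch (not minikube's code, which runs stat through the SSH runner):

	package main

	import (
		"errors"
		"fmt"
		"io/fs"
		"os"
	)

	func main() {
		// Equivalent of the stat probe above: a missing kubelet client
		// cert is taken to mean this is likely a first start.
		_, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if errors.Is(err, fs.ErrNotExist) {
			fmt.Println("cert doesn't exist, likely first start")
		}
	}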
	I0414 17:34:06.489239  194818 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-771697 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-771697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.160 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:34:06.489323  194818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 17:34:06.489364  194818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:34:06.533507  194818 cri.go:89] found id: ""
	I0414 17:34:06.533592  194818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 17:34:06.544256  194818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:34:06.553954  194818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:34:06.564034  194818 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:34:06.564062  194818 kubeadm.go:157] found existing configuration files:
	
	I0414 17:34:06.564119  194818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:34:06.576687  194818 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:34:06.576762  194818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:34:06.588908  194818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:34:06.597792  194818 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:34:06.597865  194818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:34:06.608536  194818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:34:06.617801  194818 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:34:06.617885  194818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:34:06.627467  194818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:34:06.636655  194818 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:34:06.636741  194818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
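The grep/rm pairs above are the stale-config sweep: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is deleted before kubeadm init runs. A sketch of that sweep, assuming a local filesystem rather than the ssh_runner the real code uses:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// cleanStaleConfigs removes kubeconfigs that do not reference the
	// expected endpoint, mirroring the grep-then-rm pass above.
	func cleanStaleConfigs(endpoint string) {
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			// grep exits non-zero when the pattern (or the file) is missing.
			if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
				os.Remove(f) // rm -f semantics: ignore already-absent files
				fmt.Println("removed stale config", f)
			}
		}
	}

	func main() {
		cleanStaleConfigs("https://control-plane.minikube.internal:8443")
	}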
	I0414 17:34:06.646124  194818 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:34:06.787873  194818 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 17:34:06.787997  194818 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:34:06.953025  194818 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:34:06.953192  194818 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:34:06.953369  194818 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 17:34:07.153750  194818 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:34:07.321996  194818 out.go:235]   - Generating certificates and keys ...
	I0414 17:34:07.322256  194818 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:34:07.322418  194818 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:34:07.322575  194818 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 17:34:07.342291  194818 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 17:34:07.874968  194818 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 17:34:07.975921  194818 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 17:34:08.132358  194818 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 17:34:08.132675  194818 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-771697 localhost] and IPs [192.168.61.160 127.0.0.1 ::1]
	I0414 17:34:08.306341  194818 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 17:34:08.306609  194818 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-771697 localhost] and IPs [192.168.61.160 127.0.0.1 ::1]
	I0414 17:34:08.449132  194818 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 17:34:08.656555  194818 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 17:34:08.881955  194818 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 17:34:08.882055  194818 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:34:09.061139  194818 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:34:09.175989  194818 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:34:09.260024  194818 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:34:09.415685  194818 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:34:09.432742  194818 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:34:09.432892  194818 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:34:09.432953  194818 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:34:09.571876  194818 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:34:09.574470  194818 out.go:235]   - Booting up control plane ...
	I0414 17:34:09.574605  194818 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:34:09.579146  194818 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:34:09.580136  194818 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:34:09.582988  194818 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:34:09.588338  194818 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 17:34:49.585986  194818 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 17:34:49.586296  194818 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:34:49.586604  194818 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:34:54.586250  194818 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:34:54.586479  194818 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:35:04.585598  194818 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:35:04.585938  194818 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:35:24.585587  194818 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:35:24.585917  194818 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:36:04.587981  194818 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:36:04.588474  194818 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:36:04.588495  194818 kubeadm.go:310] 
	I0414 17:36:04.588580  194818 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 17:36:04.588708  194818 kubeadm.go:310] 		timed out waiting for the condition
	I0414 17:36:04.588740  194818 kubeadm.go:310] 
	I0414 17:36:04.588839  194818 kubeadm.go:310] 	This error is likely caused by:
	I0414 17:36:04.588923  194818 kubeadm.go:310] 		- The kubelet is not running
	I0414 17:36:04.589177  194818 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 17:36:04.589196  194818 kubeadm.go:310] 
	I0414 17:36:04.589442  194818 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 17:36:04.589534  194818 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 17:36:04.589607  194818 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 17:36:04.589613  194818 kubeadm.go:310] 
	I0414 17:36:04.589883  194818 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 17:36:04.590073  194818 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 17:36:04.590079  194818 kubeadm.go:310] 
	I0414 17:36:04.590326  194818 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 17:36:04.590545  194818 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 17:36:04.590722  194818 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 17:36:04.591059  194818 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 17:36:04.591116  194818 kubeadm.go:310] 
	I0414 17:36:04.591355  194818 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:36:04.591605  194818 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 17:36:04.592173  194818 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
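The repeated [kubelet-check] lines are kubeadm polling the kubelet's local healthz endpoint on port 10248, with the gap between probes roughly doubling (5s, 10s, 20s, 40s after the initial 40s grace period) until the 4m0s wait expires. An equivalent standalone probe loop, with intervals inferred from the timestamps above rather than taken from kubeadm's source:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// waitForKubelet polls the healthz endpoint the way the
	// [kubelet-check] lines above do, doubling the wait between attempts.
	func waitForKubelet(deadline time.Duration) error {
		interval := 5 * time.Second
		end := time.Now().Add(deadline)
		for time.Now().Before(end) {
			resp, err := http.Get("http://localhost:10248/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				err = fmt.Errorf("healthz returned %d", resp.StatusCode)
			}
			fmt.Println("kubelet not healthy yet:", err)
			time.Sleep(interval)
			interval *= 2 // backoff: 5s, 10s, 20s, 40s ...
		}
		return fmt.Errorf("timed out waiting for kubelet after %s", deadline)
	}

	func main() {
		if err := waitForKubelet(4 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}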
	W0414 17:36:04.592322  194818 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-771697 localhost] and IPs [192.168.61.160 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-771697 localhost] and IPs [192.168.61.160 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0414 17:36:04.592367  194818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 17:36:06.212297  194818 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.619903302s)
	I0414 17:36:06.212381  194818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
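After the first init times out, the node is wiped with kubeadm reset and the whole init sequence is repeated once, which is why the identical certificate and kubeconfig phases reappear below at 17:36. Schematically (argument lists abbreviated; the real invocations run over SSH with the full flag set shown above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// initWithRetry mirrors the two-attempt flow in this log: kubeadm init,
	// and on failure a forced reset followed by one more attempt.
	func initWithRetry(initArgs, resetArgs []string) error {
		if exec.Command("kubeadm", initArgs...).Run() == nil {
			return nil
		}
		// First attempt failed: wipe kubeadm state before retrying.
		if err := exec.Command("kubeadm", resetArgs...).Run(); err != nil {
			return fmt.Errorf("kubeadm reset failed: %w", err)
		}
		if err := exec.Command("kubeadm", initArgs...).Run(); err != nil {
			return fmt.Errorf("kubeadm init failed twice: %w", err)
		}
		return nil
	}

	func main() {
		err := initWithRetry(
			[]string{"init", "--config", "/var/tmp/minikube/kubeadm.yaml"},
			[]string{"reset", "--force"},
		)
		if err != nil {
			fmt.Println(err)
		}
	}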
	I0414 17:36:06.229199  194818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:36:06.239497  194818 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:36:06.239519  194818 kubeadm.go:157] found existing configuration files:
	
	I0414 17:36:06.239568  194818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:36:06.249392  194818 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:36:06.249452  194818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:36:06.259245  194818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:36:06.268855  194818 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:36:06.268916  194818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:36:06.282294  194818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:36:06.291632  194818 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:36:06.291677  194818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:36:06.305345  194818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:36:06.315289  194818 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:36:06.315348  194818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 17:36:06.328917  194818 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:36:06.402355  194818 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 17:36:06.402509  194818 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:36:06.577968  194818 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:36:06.578084  194818 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:36:06.578221  194818 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 17:36:06.795665  194818 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:36:06.798255  194818 out.go:235]   - Generating certificates and keys ...
	I0414 17:36:06.798363  194818 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:36:06.798471  194818 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:36:06.798585  194818 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 17:36:06.798666  194818 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 17:36:06.798756  194818 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 17:36:06.798828  194818 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 17:36:06.798913  194818 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 17:36:06.799268  194818 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 17:36:06.799606  194818 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 17:36:06.800092  194818 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 17:36:06.800149  194818 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 17:36:06.800231  194818 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:36:06.857021  194818 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:36:07.071584  194818 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:36:07.346773  194818 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:36:07.613915  194818 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:36:07.637303  194818 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:36:07.639125  194818 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:36:07.639218  194818 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:36:07.842790  194818 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:36:07.844352  194818 out.go:235]   - Booting up control plane ...
	I0414 17:36:07.844478  194818 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:36:07.866483  194818 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:36:07.870930  194818 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:36:07.876276  194818 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:36:07.880417  194818 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 17:36:47.883725  194818 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 17:36:47.884149  194818 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:36:47.884416  194818 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:36:52.885300  194818 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:36:52.885582  194818 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:37:02.885154  194818 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:37:02.885404  194818 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:37:22.884888  194818 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:37:22.885166  194818 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:38:02.885358  194818 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:38:02.885626  194818 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:38:02.885642  194818 kubeadm.go:310] 
	I0414 17:38:02.885698  194818 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 17:38:02.885764  194818 kubeadm.go:310] 		timed out waiting for the condition
	I0414 17:38:02.885777  194818 kubeadm.go:310] 
	I0414 17:38:02.885837  194818 kubeadm.go:310] 	This error is likely caused by:
	I0414 17:38:02.885897  194818 kubeadm.go:310] 		- The kubelet is not running
	I0414 17:38:02.886034  194818 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 17:38:02.886071  194818 kubeadm.go:310] 
	I0414 17:38:02.886247  194818 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 17:38:02.886318  194818 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 17:38:02.886373  194818 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 17:38:02.886382  194818 kubeadm.go:310] 
	I0414 17:38:02.886524  194818 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 17:38:02.886664  194818 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 17:38:02.886684  194818 kubeadm.go:310] 
	I0414 17:38:02.886854  194818 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 17:38:02.887004  194818 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 17:38:02.887099  194818 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 17:38:02.887213  194818 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 17:38:02.887228  194818 kubeadm.go:310] 
	I0414 17:38:02.887864  194818 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:38:02.887981  194818 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 17:38:02.888094  194818 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 17:38:02.888182  194818 kubeadm.go:394] duration metric: took 3m56.398946268s to StartCluster
	I0414 17:38:02.888235  194818 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:38:02.888291  194818 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:38:02.951160  194818 cri.go:89] found id: ""
	I0414 17:38:02.951202  194818 logs.go:282] 0 containers: []
	W0414 17:38:02.951216  194818 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:38:02.951225  194818 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:38:02.951296  194818 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:38:02.989810  194818 cri.go:89] found id: ""
	I0414 17:38:02.989860  194818 logs.go:282] 0 containers: []
	W0414 17:38:02.989872  194818 logs.go:284] No container was found matching "etcd"
	I0414 17:38:02.989889  194818 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:38:02.989953  194818 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:38:03.040430  194818 cri.go:89] found id: ""
	I0414 17:38:03.040461  194818 logs.go:282] 0 containers: []
	W0414 17:38:03.040473  194818 logs.go:284] No container was found matching "coredns"
	I0414 17:38:03.040480  194818 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:38:03.040537  194818 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:38:03.087106  194818 cri.go:89] found id: ""
	I0414 17:38:03.087138  194818 logs.go:282] 0 containers: []
	W0414 17:38:03.087150  194818 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:38:03.087159  194818 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:38:03.087231  194818 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:38:03.136891  194818 cri.go:89] found id: ""
	I0414 17:38:03.136925  194818 logs.go:282] 0 containers: []
	W0414 17:38:03.136950  194818 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:38:03.136958  194818 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:38:03.137026  194818 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:38:03.181384  194818 cri.go:89] found id: ""
	I0414 17:38:03.181419  194818 logs.go:282] 0 containers: []
	W0414 17:38:03.181431  194818 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:38:03.181441  194818 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:38:03.181510  194818 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:38:03.218916  194818 cri.go:89] found id: ""
	I0414 17:38:03.218939  194818 logs.go:282] 0 containers: []
	W0414 17:38:03.218949  194818 logs.go:284] No container was found matching "kindnet"
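This post-mortem sweep queries crictl once per control-plane component; an empty ID list for every name confirms that no container was ever created, so the failure happened before the kubelet could launch any static pod. A sketch of the sweep for a local runtime (the real code also filters by CRI state and namespace):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, name := range components {
			// One crictl query per component, as in the cri.go lines above.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			fmt.Printf("%s: %d containers (err=%v)\n", name, len(ids), err)
		}
	}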
	I0414 17:38:03.218961  194818 logs.go:123] Gathering logs for kubelet ...
	I0414 17:38:03.218976  194818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:38:03.278140  194818 logs.go:123] Gathering logs for dmesg ...
	I0414 17:38:03.278182  194818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:38:03.294746  194818 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:38:03.294784  194818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:38:03.438687  194818 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:38:03.438720  194818 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:38:03.438739  194818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:38:03.597612  194818 logs.go:123] Gathering logs for container status ...
	I0414 17:38:03.597652  194818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
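With no containers to inspect, the fallback is host-level collection: the kubelet and CRI-O journals, dmesg, a node description via the cluster's own kubectl, and raw container status. A compact sketch of that collection step, with command strings copied from the Run: lines above and output handling simplified:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same host-level collectors as the logs.go "Gathering logs" lines.
		sources := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"CRI-O":            "sudo journalctl -u crio -n 400",
			"container status": "sudo crictl ps -a",
		}
		for name, cmd := range sources {
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			fmt.Printf("==> %s <== (err=%v)\n%s\n", name, err, out)
		}
	}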
	W0414 17:38:03.646037  194818 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 17:38:03.646120  194818 out.go:270] * 
	
	W0414 17:38:03.646213  194818 out.go:270] * 
	W0414 17:38:03.647014  194818 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 17:38:03.649758  194818 out.go:201] 
	W0414 17:38:03.650814  194818 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 17:38:03.650872  194818 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 17:38:03.650899  194818 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 17:38:03.652224  194818 out.go:201] 

                                                
                                                
** /stderr **
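For reference, the crictl follow-up quoted in the kubeadm output above runs inside the minikube guest, not on the host; a minimal sketch (the container ID is a placeholder, and the sudo prefix is an assumption for a non-root node shell):

	out/minikube-linux-amd64 -p kubernetes-upgrade-771697 ssh
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID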
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-771697 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
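The suggestion at the end of the log attributes the kubelet failure to a cgroup-driver mismatch; a hypothetical retry applying it (not executed in this run) would add a single flag to the failing start command:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-771697 --memory=2200 --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio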
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-771697
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-771697: (1.436624822s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-771697 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-771697 status --format={{.Host}}: exit status 7 (83.998842ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
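minikube status reports component health through its exit code, one bit per stopped component (assuming the upstream bit assignment of host=1, kubelet=2, apiserver=4): a fully stopped cluster yields 1 + 2 + 4 = 7, so a non-zero exit right after a deliberate stop is expected, hence the "(may be ok)".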
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-771697 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-771697 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.377850692s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-771697 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-771697 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-771697 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (81.149539ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-771697] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20349
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-771697
	    minikube start -p kubernetes-upgrade-771697 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7716972 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-771697 --kubernetes-version=v1.32.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-771697 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-771697 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.38852566s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-04-14 17:39:48.129303435 +0000 UTC m=+4123.807631974
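The "m=+4123.807631974" suffix in the timestamp is Go's monotonic-clock reading from time.Time's string form: the failure fired roughly 4123.8 s, i.e. about 68 min 44 s, after the test process started.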
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-771697 -n kubernetes-upgrade-771697
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-771697 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-771697 logs -n 25: (2.505445349s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-993774 sudo                               | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC | 14 Apr 25 17:39 UTC |
	|         | systemctl cat kubelet                                |                   |         |         |                     |                     |
	|         | --no-pager                                           |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo                               | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC | 14 Apr 25 17:39 UTC |
	|         | journalctl -xeu kubelet --all                        |                   |         |         |                     |                     |
	|         | --full --no-pager                                    |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo cat                           | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC | 14 Apr 25 17:39 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo cat                           | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC | 14 Apr 25 17:39 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo                               | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC |                     |
	|         | systemctl status docker --all                        |                   |         |         |                     |                     |
	|         | --full --no-pager                                    |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo                               | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC | 14 Apr 25 17:39 UTC |
	|         | systemctl cat docker                                 |                   |         |         |                     |                     |
	|         | --no-pager                                           |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo cat                           | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC | 14 Apr 25 17:39 UTC |
	|         | /etc/docker/daemon.json                              |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo docker                        | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC |                     |
	|         | system info                                          |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo                               | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC |                     |
	|         | systemctl status cri-docker                          |                   |         |         |                     |                     |
	|         | --all --full --no-pager                              |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo                               | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC | 14 Apr 25 17:39 UTC |
	|         | systemctl cat cri-docker                             |                   |         |         |                     |                     |
	|         | --no-pager                                           |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo cat                           | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo cat                           | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC | 14 Apr 25 17:39 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo                               | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC | 14 Apr 25 17:39 UTC |
	|         | cri-dockerd --version                                |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo                               | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC |                     |
	|         | systemctl status containerd                          |                   |         |         |                     |                     |
	|         | --all --full --no-pager                              |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo                               | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC | 14 Apr 25 17:39 UTC |
	|         | systemctl cat containerd                             |                   |         |         |                     |                     |
	|         | --no-pager                                           |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo cat                           | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC | 14 Apr 25 17:39 UTC |
	|         | /lib/systemd/system/containerd.service               |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo cat                           | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC | 14 Apr 25 17:39 UTC |
	|         | /etc/containerd/config.toml                          |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo                               | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC | 14 Apr 25 17:39 UTC |
	|         | containerd config dump                               |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo                               | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC | 14 Apr 25 17:39 UTC |
	|         | systemctl status crio --all                          |                   |         |         |                     |                     |
	|         | --full --no-pager                                    |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo                               | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC | 14 Apr 25 17:39 UTC |
	|         | systemctl cat crio --no-pager                        |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo find                          | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC | 14 Apr 25 17:39 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                   |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                   |         |         |                     |                     |
	| ssh     | -p flannel-993774 sudo crio                          | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC | 14 Apr 25 17:39 UTC |
	|         | config                                               |                   |         |         |                     |                     |
	| delete  | -p flannel-993774                                    | flannel-993774    | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC | 14 Apr 25 17:39 UTC |
	| ssh     | -p bridge-993774 pgrep -a                            | bridge-993774     | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC | 14 Apr 25 17:39 UTC |
	|         | kubelet                                              |                   |         |         |                     |                     |
	| start   | -p no-preload-721806                                 | no-preload-721806 | jenkins | v1.35.0 | 14 Apr 25 17:39 UTC |                     |
	|         | --memory=2200                                        |                   |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                   |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                        |                   |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                   |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                   |         |         |                     |                     |
	|---------|------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 17:39:42
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 17:39:42.715365  208121 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:39:42.715647  208121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:39:42.715658  208121 out.go:358] Setting ErrFile to fd 2...
	I0414 17:39:42.715664  208121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:39:42.715935  208121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	I0414 17:39:42.717357  208121 out.go:352] Setting JSON to false
	I0414 17:39:42.718818  208121 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8481,"bootTime":1744643902,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 17:39:42.718892  208121 start.go:139] virtualization: kvm guest
	I0414 17:39:42.720306  208121 out.go:177] * [no-preload-721806] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 17:39:42.721250  208121 notify.go:220] Checking for updates...
	I0414 17:39:42.721259  208121 out.go:177]   - MINIKUBE_LOCATION=20349
	I0414 17:39:42.722487  208121 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 17:39:42.723636  208121 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:39:42.724943  208121 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 17:39:42.726372  208121 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 17:39:42.727483  208121 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 17:39:38.852620  204862 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 5c768ca92941422ba9b176f307a04118c45037776fa824fbd40096035226386d 5c78db9d822a25b3e9e7cb13288c927bd7c01149ddf51d8f469a3df9f400b40e 0783d2882d3eaa07f574337c86546d3cdd95d36c2d01d39b0a2a0e1a858ecbfb 86b3ec76929456b1ba79d34505c38d00bfc2cbde9b7f076020adf115d65893a3 dba002baf77c7c1de681006708c9550c2470fcd278442197a69e18298cf61807 2fb8b574c605c5ed900dceb1d064ef60d0c627e5c31f718ca950ce600720ce1c b58d93e203cbce7f74bd608f6f26e6632cbb28d85bbc51799c86e9cbd34e55c0 6e68f73d5b6a3186b47243c8b913281c801349bcb1ec3339471c0d0f148a1759 fbf180750431e815250b8110f49cd9acdf19e4fcfed555c844f9f48360773417 d5b58fd7c3b30fd463af0f51d4c845dca72331ec9e69c39b1c413f5434c44c23: (14.563511443s)
	W0414 17:39:38.852708  204862 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 5c768ca92941422ba9b176f307a04118c45037776fa824fbd40096035226386d 5c78db9d822a25b3e9e7cb13288c927bd7c01149ddf51d8f469a3df9f400b40e 0783d2882d3eaa07f574337c86546d3cdd95d36c2d01d39b0a2a0e1a858ecbfb 86b3ec76929456b1ba79d34505c38d00bfc2cbde9b7f076020adf115d65893a3 dba002baf77c7c1de681006708c9550c2470fcd278442197a69e18298cf61807 2fb8b574c605c5ed900dceb1d064ef60d0c627e5c31f718ca950ce600720ce1c b58d93e203cbce7f74bd608f6f26e6632cbb28d85bbc51799c86e9cbd34e55c0 6e68f73d5b6a3186b47243c8b913281c801349bcb1ec3339471c0d0f148a1759 fbf180750431e815250b8110f49cd9acdf19e4fcfed555c844f9f48360773417 d5b58fd7c3b30fd463af0f51d4c845dca72331ec9e69c39b1c413f5434c44c23: Process exited with status 1
	stdout:
	5c768ca92941422ba9b176f307a04118c45037776fa824fbd40096035226386d
	5c78db9d822a25b3e9e7cb13288c927bd7c01149ddf51d8f469a3df9f400b40e
	0783d2882d3eaa07f574337c86546d3cdd95d36c2d01d39b0a2a0e1a858ecbfb
	86b3ec76929456b1ba79d34505c38d00bfc2cbde9b7f076020adf115d65893a3
	dba002baf77c7c1de681006708c9550c2470fcd278442197a69e18298cf61807
	2fb8b574c605c5ed900dceb1d064ef60d0c627e5c31f718ca950ce600720ce1c
	b58d93e203cbce7f74bd608f6f26e6632cbb28d85bbc51799c86e9cbd34e55c0
	
	stderr:
	E0414 17:39:38.809919    4063 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e68f73d5b6a3186b47243c8b913281c801349bcb1ec3339471c0d0f148a1759\": container with ID starting with 6e68f73d5b6a3186b47243c8b913281c801349bcb1ec3339471c0d0f148a1759 not found: ID does not exist" containerID="6e68f73d5b6a3186b47243c8b913281c801349bcb1ec3339471c0d0f148a1759"
	time="2025-04-14T17:39:38Z" level=fatal msg="stopping the container \"6e68f73d5b6a3186b47243c8b913281c801349bcb1ec3339471c0d0f148a1759\": rpc error: code = NotFound desc = could not find container \"6e68f73d5b6a3186b47243c8b913281c801349bcb1ec3339471c0d0f148a1759\": container with ID starting with 6e68f73d5b6a3186b47243c8b913281c801349bcb1ec3339471c0d0f148a1759 not found: ID does not exist"
	I0414 17:39:38.852808  204862 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0414 17:39:38.903624  204862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:39:38.915332  204862 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Apr 14 17:38 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Apr 14 17:38 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5755 Apr 14 17:38 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Apr 14 17:38 /etc/kubernetes/scheduler.conf
	
	I0414 17:39:38.915382  204862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:39:38.926208  204862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:39:38.936358  204862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:39:38.949256  204862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0414 17:39:38.949317  204862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:39:38.961235  204862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:39:38.971247  204862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0414 17:39:38.971307  204862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 17:39:38.981477  204862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:39:38.993296  204862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:39:39.051804  204862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:39:39.784999  204862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:39:40.029482  204862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:39:40.105448  204862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:39:40.229353  204862 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:39:40.229436  204862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:39:40.730226  204862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:39:41.230319  204862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:39:41.262322  204862 api_server.go:72] duration metric: took 1.032967918s to wait for apiserver process to appear ...
	I0414 17:39:41.262350  204862 api_server.go:88] waiting for apiserver healthz status ...
	I0414 17:39:41.262374  204862 api_server.go:253] Checking apiserver healthz at https://192.168.61.160:8443/healthz ...
	I0414 17:39:42.729296  208121 config.go:182] Loaded profile config "bridge-993774": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:39:42.729481  208121 config.go:182] Loaded profile config "kubernetes-upgrade-771697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:39:42.729671  208121 config.go:182] Loaded profile config "old-k8s-version-768580": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 17:39:42.729808  208121 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 17:39:42.789211  208121 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 17:39:42.790531  208121 start.go:297] selected driver: kvm2
	I0414 17:39:42.790551  208121 start.go:901] validating driver "kvm2" against <nil>
	I0414 17:39:42.790571  208121 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 17:39:42.791399  208121 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:39:42.791510  208121 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20349-149500/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 17:39:42.809663  208121 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 17:39:42.809720  208121 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 17:39:42.810060  208121 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 17:39:42.810123  208121 cni.go:84] Creating CNI manager for ""
	I0414 17:39:42.810185  208121 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:39:42.810198  208121 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 17:39:42.810260  208121 start.go:340] cluster config:
	{Name:no-preload-721806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-721806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:39:42.810404  208121 iso.go:125] acquiring lock: {Name:mk56ab209abfa01de10f2f82564ecd03de00499a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:39:42.811986  208121 out.go:177] * Starting "no-preload-721806" primary control-plane node in "no-preload-721806" cluster
	I0414 17:39:42.813008  208121 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 17:39:42.813172  208121 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/no-preload-721806/config.json ...
	I0414 17:39:42.813218  208121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/no-preload-721806/config.json: {Name:mkcbd6e2bad1e6f256d82ce638f13ec5cfe7a0b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:39:42.813275  208121 cache.go:107] acquiring lock: {Name:mkab61d32cff1691e5eb7ab96a0864baa099bed4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:39:42.813279  208121 cache.go:107] acquiring lock: {Name:mk84d48698ca7ce9e23eabe7024682a667bc9fcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:39:42.813317  208121 cache.go:107] acquiring lock: {Name:mk4626e1b849c3bd06ae3906120ccc10eadf27c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:39:42.813381  208121 cache.go:115] /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0414 17:39:42.813391  208121 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 124.976µs
	I0414 17:39:42.813391  208121 start.go:360] acquireMachinesLock for no-preload-721806: {Name:mk6f64d523f60ec1e047c10a4c586315976dcd43 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 17:39:42.813401  208121 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0414 17:39:42.813415  208121 cache.go:107] acquiring lock: {Name:mkf37a780268a6d1ed4f4f92ad039489e9f61024 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:39:42.813450  208121 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.2
	I0414 17:39:42.813448  208121 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.2
	I0414 17:39:42.813489  208121 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0414 17:39:42.813546  208121 cache.go:107] acquiring lock: {Name:mkd4d4e016dc69f182ea9a9c374369dd7d7ccdce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:39:42.813550  208121 cache.go:107] acquiring lock: {Name:mk67762552f4e6439266537f6ff522a0e39d2b95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:39:42.813613  208121 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0414 17:39:42.813645  208121 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0414 17:39:42.813768  208121 cache.go:107] acquiring lock: {Name:mkc6a6fc6055c3e928eb194abf4f81c43f629bff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:39:42.813793  208121 cache.go:107] acquiring lock: {Name:mk4dff693ed8257d38526e24b81e1c1a26651081 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:39:42.813861  208121 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.2
	I0414 17:39:42.813890  208121 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0414 17:39:42.815472  208121 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0414 17:39:42.815500  208121 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.2
	I0414 17:39:42.815541  208121 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.2
	I0414 17:39:42.815606  208121 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0414 17:39:42.815472  208121 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0414 17:39:42.815766  208121 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.2
	I0414 17:39:42.815868  208121 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0414 17:39:42.950683  208121 cache.go:162] opening:  /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0414 17:39:42.958091  208121 cache.go:162] opening:  /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2
	I0414 17:39:42.960354  208121 cache.go:162] opening:  /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2
	I0414 17:39:42.966060  208121 cache.go:162] opening:  /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0414 17:39:42.966965  208121 cache.go:162] opening:  /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2
	I0414 17:39:42.987448  208121 cache.go:162] opening:  /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0414 17:39:42.996198  208121 cache.go:162] opening:  /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2
	I0414 17:39:43.053967  208121 cache.go:157] /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0414 17:39:43.053992  208121 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 240.577836ms
	I0414 17:39:43.054002  208121 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0414 17:39:43.332957  208121 cache.go:157] /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 exists
	I0414 17:39:43.332990  208121 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.2" -> "/home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2" took 519.687262ms
	I0414 17:39:43.333007  208121 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.2 -> /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 succeeded
	I0414 17:39:43.546840  208121 start.go:364] duration metric: took 733.426951ms to acquireMachinesLock for "no-preload-721806"
	I0414 17:39:43.546912  208121 start.go:93] Provisioning new machine with config: &{Name:no-preload-721806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-721806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 17:39:43.547020  208121 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 17:39:41.423059  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:41.423639  206309 main.go:141] libmachine: (old-k8s-version-768580) found domain IP: 192.168.72.58
	I0414 17:39:41.423664  206309 main.go:141] libmachine: (old-k8s-version-768580) reserving static IP address...
	I0414 17:39:41.423688  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has current primary IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:41.424019  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-768580", mac: "52:54:00:d8:47:6d", ip: "192.168.72.58"} in network mk-old-k8s-version-768580
	I0414 17:39:41.500117  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | Getting to WaitForSSH function...
	I0414 17:39:41.500149  206309 main.go:141] libmachine: (old-k8s-version-768580) reserved static IP address 192.168.72.58 for domain old-k8s-version-768580
	I0414 17:39:41.500160  206309 main.go:141] libmachine: (old-k8s-version-768580) waiting for SSH...
	I0414 17:39:41.503790  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:41.504277  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:41.504307  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:41.504542  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | Using SSH client type: external
	I0414 17:39:41.504570  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | Using SSH private key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa (-rw-------)
	I0414 17:39:41.504617  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 17:39:41.504632  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | About to run SSH command:
	I0414 17:39:41.504645  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | exit 0
	I0414 17:39:41.650429  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | SSH cmd err, output: <nil>: 
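
The block above shows the driver probing SSH readiness: it shells out to the system ssh binary with the logged options and runs `exit 0` until the guest answers. A rough sketch of that probe via os/exec (address, user, and key options are taken from the log; the retry cadence and helper name are assumptions):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries `ssh ... exit 0` until the guest accepts the connection
// or the deadline passes.
func waitForSSH(addr, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-F", "/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@"+addr,
			"exit 0",
		)
		if err := cmd.Run(); err == nil {
			return nil // guest answered; SSH is up
		}
		time.Sleep(2 * time.Second) // assumed retry interval
	}
	return fmt.Errorf("ssh to %s not ready after %s", addr, timeout)
}

func main() {
	if err := waitForSSH("192.168.72.58", "/path/to/id_rsa", 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```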
	I0414 17:39:41.650786  206309 main.go:141] libmachine: (old-k8s-version-768580) KVM machine creation complete
	I0414 17:39:41.651234  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetConfigRaw
	I0414 17:39:41.720559  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:39:41.720889  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:39:41.721094  206309 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 17:39:41.721168  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetState
	I0414 17:39:41.722982  206309 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 17:39:41.722998  206309 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 17:39:41.723006  206309 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 17:39:41.723015  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:41.726216  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:41.726577  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:41.726600  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:41.727533  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:39:41.727717  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:41.727870  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:41.728007  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:39:41.728195  206309 main.go:141] libmachine: Using SSH client type: native
	I0414 17:39:41.728469  206309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:39:41.728482  206309 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 17:39:41.833225  206309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 17:39:41.833253  206309 main.go:141] libmachine: Detecting the provisioner...
	I0414 17:39:41.833265  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:41.835829  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:41.836187  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:41.836219  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:41.836342  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:39:41.836500  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:41.836683  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:41.836833  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:39:41.836999  206309 main.go:141] libmachine: Using SSH client type: native
	I0414 17:39:41.837214  206309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:39:41.837226  206309 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 17:39:41.947434  206309 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 17:39:41.947515  206309 main.go:141] libmachine: found compatible host: buildroot
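
Provisioner detection above works by running `cat /etc/os-release` over SSH and matching its fields, which here identify Buildroot. A small sketch of that parse, assuming the file contents have already been fetched:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner extracts the ID field from /etc/os-release content.
func detectProvisioner(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
			return strings.Trim(v, `"`)
		}
	}
	return "unknown"
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\n"
	fmt.Println(detectProvisioner(sample)) // buildroot
}
```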
	I0414 17:39:41.947522  206309 main.go:141] libmachine: Provisioning with buildroot...
	I0414 17:39:41.947529  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetMachineName
	I0414 17:39:41.947776  206309 buildroot.go:166] provisioning hostname "old-k8s-version-768580"
	I0414 17:39:41.947808  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetMachineName
	I0414 17:39:41.947998  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:41.952149  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:41.953630  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:41.953669  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:41.953955  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:39:41.954175  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:41.954317  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:41.954463  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:39:41.954605  206309 main.go:141] libmachine: Using SSH client type: native
	I0414 17:39:41.954843  206309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:39:41.954859  206309 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-768580 && echo "old-k8s-version-768580" | sudo tee /etc/hostname
	I0414 17:39:42.077742  206309 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-768580
	
	I0414 17:39:42.077779  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:42.393444  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:42.393900  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:42.393927  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:42.394066  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:39:42.394268  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:42.394415  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:42.394551  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:39:42.394719  206309 main.go:141] libmachine: Using SSH client type: native
	I0414 17:39:42.395039  206309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:39:42.395066  206309 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-768580' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-768580/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-768580' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 17:39:42.520647  206309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
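
The shell block above is the idempotent /etc/hosts rewrite: if no entry already names the new host, it either rewrites an existing 127.0.1.1 line or appends one. A sketch of composing that command for an arbitrary hostname (the helper name is illustrative, not minikube's):

```go
package main

import "fmt"

// etcHostsCmd builds the idempotent /etc/hosts update seen in the log.
func etcHostsCmd(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(etcHostsCmd("old-k8s-version-768580"))
}
```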
	I0414 17:39:42.520676  206309 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20349-149500/.minikube CaCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20349-149500/.minikube}
	I0414 17:39:42.520723  206309 buildroot.go:174] setting up certificates
	I0414 17:39:42.520739  206309 provision.go:84] configureAuth start
	I0414 17:39:42.520754  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetMachineName
	I0414 17:39:42.521063  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:39:42.524518  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:42.524892  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:42.524916  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:42.525081  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:42.527885  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:42.528213  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:42.528232  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:42.528447  206309 provision.go:143] copyHostCerts
	I0414 17:39:42.528508  206309 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem, removing ...
	I0414 17:39:42.528538  206309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem
	I0414 17:39:42.528644  206309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem (1082 bytes)
	I0414 17:39:42.528776  206309 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem, removing ...
	I0414 17:39:42.528790  206309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem
	I0414 17:39:42.528837  206309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem (1123 bytes)
	I0414 17:39:42.528924  206309 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem, removing ...
	I0414 17:39:42.528936  206309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem
	I0414 17:39:42.528972  206309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem (1675 bytes)
	I0414 17:39:42.529047  206309 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-768580 san=[127.0.0.1 192.168.72.58 localhost minikube old-k8s-version-768580]
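
The server certificate above is issued with SANs covering the loopback address, the guest IP, and the host's names. A simplified sketch of generating such a cert with crypto/x509; minikube signs with its cluster CA rather than self-signing, so treat this as a stand-in:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// selfSignedServerCert issues a cert whose SANs cover the IPs and DNS names
// passed in, sorting each entry into IPAddresses or DNSNames.
func selfSignedServerCert(sans []string) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-768580"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, s := range sans {
		if ip := net.ParseIP(s); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, s)
		}
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	pemBytes, err := selfSignedServerCert([]string{
		"127.0.0.1", "192.168.72.58", "localhost", "minikube", "old-k8s-version-768580",
	})
	if err != nil {
		fmt.Println("cert generation failed:", err)
		return
	}
	fmt.Printf("generated %d bytes of PEM\n", len(pemBytes))
}
```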
	I0414 17:39:42.842504  206309 provision.go:177] copyRemoteCerts
	I0414 17:39:42.842558  206309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 17:39:42.842589  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:42.845945  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:42.846351  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:42.846378  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:42.846579  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:39:42.846765  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:42.846933  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:39:42.847052  206309 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:39:42.933045  206309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 17:39:42.963827  206309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0414 17:39:42.992676  206309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 17:39:43.022320  206309 provision.go:87] duration metric: took 501.565888ms to configureAuth
	I0414 17:39:43.022341  206309 buildroot.go:189] setting minikube options for container-runtime
	I0414 17:39:43.022479  206309 config.go:182] Loaded profile config "old-k8s-version-768580": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 17:39:43.022585  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:43.026086  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.026455  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:43.026478  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.026607  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:39:43.026808  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:43.026978  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:43.027134  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:39:43.027281  206309 main.go:141] libmachine: Using SSH client type: native
	I0414 17:39:43.027458  206309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:39:43.027470  206309 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 17:39:43.289257  206309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 17:39:43.289285  206309 main.go:141] libmachine: Checking connection to Docker...
	I0414 17:39:43.289296  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetURL
	I0414 17:39:43.291038  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | using libvirt version 6000000
	I0414 17:39:43.293844  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.294256  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:43.294279  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.294429  206309 main.go:141] libmachine: Docker is up and running!
	I0414 17:39:43.294457  206309 main.go:141] libmachine: Reticulating splines...
	I0414 17:39:43.294467  206309 client.go:171] duration metric: took 25.29694666s to LocalClient.Create
	I0414 17:39:43.294490  206309 start.go:167] duration metric: took 25.297021937s to libmachine.API.Create "old-k8s-version-768580"
	I0414 17:39:43.294499  206309 start.go:293] postStartSetup for "old-k8s-version-768580" (driver="kvm2")
	I0414 17:39:43.294512  206309 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 17:39:43.294539  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:39:43.294780  206309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 17:39:43.294808  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:43.297542  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.297849  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:43.297874  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.297992  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:39:43.298183  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:43.298332  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:39:43.298480  206309 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:39:43.387489  206309 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 17:39:43.393038  206309 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 17:39:43.393065  206309 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/addons for local assets ...
	I0414 17:39:43.393148  206309 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/files for local assets ...
	I0414 17:39:43.393256  206309 filesync.go:149] local asset: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem -> 1566332.pem in /etc/ssl/certs
	I0414 17:39:43.393393  206309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 17:39:43.403782  206309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:39:43.431753  206309 start.go:296] duration metric: took 137.238727ms for postStartSetup
	I0414 17:39:43.431803  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetConfigRaw
	I0414 17:39:43.432345  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:39:43.435200  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.435632  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:43.435656  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.435905  206309 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/config.json ...
	I0414 17:39:43.436065  206309 start.go:128] duration metric: took 25.459688147s to createHost
	I0414 17:39:43.436084  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:43.438591  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.438941  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:43.438967  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.439138  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:39:43.439342  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:43.439518  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:43.439686  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:39:43.439895  206309 main.go:141] libmachine: Using SSH client type: native
	I0414 17:39:43.440160  206309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:39:43.440180  206309 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 17:39:43.546702  206309 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744652383.520454048
	
	I0414 17:39:43.546726  206309 fix.go:216] guest clock: 1744652383.520454048
	I0414 17:39:43.546735  206309 fix.go:229] Guest: 2025-04-14 17:39:43.520454048 +0000 UTC Remote: 2025-04-14 17:39:43.436074629 +0000 UTC m=+29.751364015 (delta=84.379419ms)
	I0414 17:39:43.546765  206309 fix.go:200] guest clock delta is within tolerance: 84.379419ms
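
fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the machine when the delta is within tolerance (84ms here). A sketch of that comparison; the 2s threshold below is an assumption, since the log does not state the actual limit:

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseGuestClock converts `date +%s.%N` output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1744652383.520454048")
	if err != nil {
		fmt.Println(err)
		return
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, not from the log
	fmt.Printf("guest clock delta %s within tolerance: %v\n", delta, delta <= tolerance)
}
```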
	I0414 17:39:43.546772  206309 start.go:83] releasing machines lock for "old-k8s-version-768580", held for 25.570551162s
	I0414 17:39:43.546801  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:39:43.547104  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:39:43.550401  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.550768  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:43.550797  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.550932  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:39:43.551471  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:39:43.551658  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:39:43.551750  206309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 17:39:43.551789  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:43.551892  206309 ssh_runner.go:195] Run: cat /version.json
	I0414 17:39:43.551916  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:43.554584  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.554847  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.554892  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:43.554917  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.555095  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:39:43.555259  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:43.555297  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:43.555321  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.555437  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:39:43.555499  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:39:43.555568  206309 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:39:43.555586  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:43.555669  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:39:43.555764  206309 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:39:43.671272  206309 ssh_runner.go:195] Run: systemctl --version
	I0414 17:39:43.681482  206309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 17:39:43.569228  204862 api_server.go:279] https://192.168.61.160:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 17:39:43.569251  204862 api_server.go:103] status: https://192.168.61.160:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 17:39:43.569267  204862 api_server.go:253] Checking apiserver healthz at https://192.168.61.160:8443/healthz ...
	I0414 17:39:43.606030  204862 api_server.go:279] https://192.168.61.160:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 17:39:43.606061  204862 api_server.go:103] status: https://192.168.61.160:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 17:39:43.763355  204862 api_server.go:253] Checking apiserver healthz at https://192.168.61.160:8443/healthz ...
	I0414 17:39:43.768291  204862 api_server.go:279] https://192.168.61.160:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 17:39:43.768322  204862 api_server.go:103] status: https://192.168.61.160:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 17:39:44.262970  204862 api_server.go:253] Checking apiserver healthz at https://192.168.61.160:8443/healthz ...
	I0414 17:39:44.270329  204862 api_server.go:279] https://192.168.61.160:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 17:39:44.270365  204862 api_server.go:103] status: https://192.168.61.160:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 17:39:44.762527  204862 api_server.go:253] Checking apiserver healthz at https://192.168.61.160:8443/healthz ...
	I0414 17:39:44.781732  204862 api_server.go:279] https://192.168.61.160:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 17:39:44.781770  204862 api_server.go:103] status: https://192.168.61.160:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 17:39:45.262833  204862 api_server.go:253] Checking apiserver healthz at https://192.168.61.160:8443/healthz ...
	I0414 17:39:45.271336  204862 api_server.go:279] https://192.168.61.160:8443/healthz returned 200:
	ok
	I0414 17:39:45.280582  204862 api_server.go:141] control plane version: v1.32.2
	I0414 17:39:45.280614  204862 api_server.go:131] duration metric: took 4.018255606s to wait for apiserver health ...
	I0414 17:39:45.280626  204862 cni.go:84] Creating CNI manager for ""
	I0414 17:39:45.280636  204862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:39:45.282154  204862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
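
The interleaved 204862 lines above trace the post-restart apiserver gate: /healthz first returns 403 for the anonymous user, then 500 while the rbac and priority-class post-start hooks finish, and finally 200. A minimal sketch of such a poll loop (TLS verification is skipped, as a health probe typically runs before the new cert chain is trusted; the endpoint and ~0.5s cadence are from the log):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver until /healthz returns 200 or the
// deadline passes. 403 and 500 are treated as "not ready yet", as in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.160:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```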
	I0414 17:39:43.851355  206309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 17:39:43.858204  206309 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 17:39:43.858287  206309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 17:39:43.878012  206309 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 17:39:43.878034  206309 start.go:495] detecting cgroup driver to use...
	I0414 17:39:43.878118  206309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 17:39:43.903925  206309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 17:39:43.926852  206309 docker.go:217] disabling cri-docker service (if available) ...
	I0414 17:39:43.926926  206309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 17:39:43.947243  206309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 17:39:43.966830  206309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 17:39:44.156390  206309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 17:39:44.528090  206309 docker.go:233] disabling docker service ...
	I0414 17:39:44.528164  206309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 17:39:44.602105  206309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 17:39:44.629760  206309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 17:39:44.831044  206309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 17:39:44.985015  206309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 17:39:45.000517  206309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 17:39:45.030658  206309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0414 17:39:45.030738  206309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:39:45.041783  206309 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 17:39:45.041880  206309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:39:45.052915  206309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:39:45.064386  206309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:39:45.077283  206309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 17:39:45.088191  206309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 17:39:45.098248  206309 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 17:39:45.098290  206309 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 17:39:45.112555  206309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
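
When `sysctl net.bridge.bridge-nf-call-iptables` fails because the proc entry does not exist until the module is loaded, the tool falls back to `modprobe br_netfilter` and then enables IPv4 forwarding, in that order. A sketch of the same check-then-load sequence (it needs root to actually take effect):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter loads br_netfilter if the sysctl entry is missing,
// then turns on ip_forward -- the same order as the log above.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// Entry absent: module not loaded yet, so try to load it.
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}
```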
	I0414 17:39:45.124069  206309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:39:45.253393  206309 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 17:39:45.392843  206309 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 17:39:45.392921  206309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 17:39:45.399668  206309 start.go:563] Will wait 60s for crictl version
	I0414 17:39:45.399730  206309 ssh_runner.go:195] Run: which crictl
	I0414 17:39:45.404903  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 17:39:45.454416  206309 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 17:39:45.454515  206309 ssh_runner.go:195] Run: crio --version
	I0414 17:39:45.492346  206309 ssh_runner.go:195] Run: crio --version
	I0414 17:39:45.532360  206309 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0414 17:39:45.283516  204862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 17:39:45.328457  204862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0414 17:39:45.356357  204862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 17:39:45.362578  204862 system_pods.go:59] 8 kube-system pods found
	I0414 17:39:45.362641  204862 system_pods.go:61] "coredns-668d6bf9bc-mrtjv" [7a6ffe5a-d350-4bb7-89ae-e488b64ef60a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 17:39:45.362653  204862 system_pods.go:61] "coredns-668d6bf9bc-xznwp" [169496de-3680-47e0-a32c-a9f93ed0b619] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 17:39:45.362664  204862 system_pods.go:61] "etcd-kubernetes-upgrade-771697" [9b37a749-6f6c-45cb-a290-c54b6ee96e67] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0414 17:39:45.362674  204862 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-771697" [78f98e66-e0e5-43ce-9ff1-794bf09323a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0414 17:39:45.362683  204862 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-771697" [8fb9cb46-2582-411c-b208-d8e141615bbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0414 17:39:45.362691  204862 system_pods.go:61] "kube-proxy-xg86l" [def9df07-a567-4cbe-8d6b-ed74663dfa47] Running
	I0414 17:39:45.362699  204862 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-771697" [b2d3328b-3391-4a94-aff3-d90d3dc16188] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0414 17:39:45.362707  204862 system_pods.go:61] "storage-provisioner" [82a80568-0d38-47f0-b9c6-08a053834b1d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0414 17:39:45.362714  204862 system_pods.go:74] duration metric: took 6.33057ms to wait for pod list to return data ...
	I0414 17:39:45.362723  204862 node_conditions.go:102] verifying NodePressure condition ...
	I0414 17:39:45.365492  204862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 17:39:45.365521  204862 node_conditions.go:123] node cpu capacity is 2
	I0414 17:39:45.365536  204862 node_conditions.go:105] duration metric: took 2.807122ms to run NodePressure ...
	I0414 17:39:45.365558  204862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:39:45.697939  204862 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 17:39:45.714123  204862 ops.go:34] apiserver oom_adj: -16
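
The -16 above is read straight from /proc/<pid>/oom_adj for the kube-apiserver process, confirming the kernel will avoid OOM-killing it. A sketch of the same lookup (the shell's pgrep step is left to the caller; the demo uses its own pid):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// oomAdj reads the oom_adj score for a given pid, as the log's
// `cat /proc/$(pgrep kube-apiserver)/oom_adj` does via the shell.
func oomAdj(pid int) (string, error) {
	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	adj, err := oomAdj(os.Getpid()) // any live pid works for a demo
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("oom_adj:", adj)
}
```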
	I0414 17:39:45.714153  204862 kubeadm.go:597] duration metric: took 21.511445724s to restartPrimaryControlPlane
	I0414 17:39:45.714165  204862 kubeadm.go:394] duration metric: took 21.655873721s to StartCluster
	I0414 17:39:45.714187  204862 settings.go:142] acquiring lock: {Name:mk0f1596f566b3225bf96154f374fff0641b21e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:39:45.714288  204862 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:39:45.715478  204862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:39:45.715745  204862 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.160 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 17:39:45.715894  204862 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 17:39:45.715973  204862 config.go:182] Loaded profile config "kubernetes-upgrade-771697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:39:45.715990  204862 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-771697"
	I0414 17:39:45.716009  204862 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-771697"
	I0414 17:39:45.716018  204862 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-771697"
	I0414 17:39:45.716030  204862 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-771697"
	W0414 17:39:45.716030  204862 addons.go:247] addon storage-provisioner should already be in state true
	I0414 17:39:45.716161  204862 host.go:66] Checking if "kubernetes-upgrade-771697" exists ...
	I0414 17:39:45.716428  204862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:39:45.716464  204862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:39:45.716501  204862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:39:45.716530  204862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:39:45.717139  204862 out.go:177] * Verifying Kubernetes components...
	I0414 17:39:45.718122  204862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:39:45.734959  204862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40879
	I0414 17:39:45.735599  204862 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:39:45.736243  204862 main.go:141] libmachine: Using API Version  1
	I0414 17:39:45.736271  204862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:39:45.736658  204862 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:39:45.737253  204862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:39:45.737294  204862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:39:45.743999  204862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46211
	I0414 17:39:45.744513  204862 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:39:45.745205  204862 main.go:141] libmachine: Using API Version  1
	I0414 17:39:45.745221  204862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:39:45.745639  204862 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:39:45.745864  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetState
	I0414 17:39:45.749476  204862 kapi.go:59] client config for kubernetes-upgrade-771697: &rest.Config{Host:"https://192.168.61.160:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/client.crt", KeyFile:"/home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kubernetes-upgrade-771697/client.key", CAFile:"/home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
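
The rest.Config dump above is the client configuration minikube builds from the profile's kubeconfig and client certificates. As a hedged sketch (not minikube's internal kapi helper), the same kind of client can be assembled with client-go and used for the pod checks that follow; the kubeconfig path is the one from this log:

	// Minimal client-go sketch: build a rest.Config from the on-disk
	// kubeconfig, then list kube-system pods (cf. system_pods.go below).
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20349-149500/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("found %d kube-system pods\n", len(pods.Items))
	}
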
	I0414 17:39:45.749910  204862 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-771697"
	W0414 17:39:45.749927  204862 addons.go:247] addon default-storageclass should already be in state true
	I0414 17:39:45.749964  204862 host.go:66] Checking if "kubernetes-upgrade-771697" exists ...
	I0414 17:39:45.750345  204862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:39:45.750385  204862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:39:45.755063  204862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46477
	I0414 17:39:45.755664  204862 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:39:45.756242  204862 main.go:141] libmachine: Using API Version  1
	I0414 17:39:45.756265  204862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:39:45.756621  204862 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:39:45.756797  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetState
	I0414 17:39:45.758713  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .DriverName
	I0414 17:39:45.760574  204862 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:39:43.548598  208121 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 17:39:43.548771  208121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:39:43.548809  208121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:39:43.566747  208121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I0414 17:39:43.567185  208121 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:39:43.567651  208121 main.go:141] libmachine: Using API Version  1
	I0414 17:39:43.567672  208121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:39:43.567975  208121 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:39:43.568132  208121 main.go:141] libmachine: (no-preload-721806) Calling .GetMachineName
	I0414 17:39:43.568292  208121 main.go:141] libmachine: (no-preload-721806) Calling .DriverName
	I0414 17:39:43.568433  208121 start.go:159] libmachine.API.Create for "no-preload-721806" (driver="kvm2")
	I0414 17:39:43.568457  208121 client.go:168] LocalClient.Create starting
	I0414 17:39:43.568491  208121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem
	I0414 17:39:43.568525  208121 main.go:141] libmachine: Decoding PEM data...
	I0414 17:39:43.568546  208121 main.go:141] libmachine: Parsing certificate...
	I0414 17:39:43.568624  208121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem
	I0414 17:39:43.568649  208121 main.go:141] libmachine: Decoding PEM data...
	I0414 17:39:43.568667  208121 main.go:141] libmachine: Parsing certificate...
	I0414 17:39:43.568702  208121 main.go:141] libmachine: Running pre-create checks...
	I0414 17:39:43.568717  208121 main.go:141] libmachine: (no-preload-721806) Calling .PreCreateCheck
	I0414 17:39:43.569007  208121 main.go:141] libmachine: (no-preload-721806) Calling .GetConfigRaw
	I0414 17:39:43.569406  208121 main.go:141] libmachine: Creating machine...
	I0414 17:39:43.569423  208121 main.go:141] libmachine: (no-preload-721806) Calling .Create
	I0414 17:39:43.569643  208121 main.go:141] libmachine: (no-preload-721806) creating KVM machine...
	I0414 17:39:43.569664  208121 main.go:141] libmachine: (no-preload-721806) creating network...
	I0414 17:39:43.570915  208121 main.go:141] libmachine: (no-preload-721806) DBG | found existing default KVM network
	I0414 17:39:43.572564  208121 main.go:141] libmachine: (no-preload-721806) DBG | I0414 17:39:43.572392  208145 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000204b20}
	I0414 17:39:43.572597  208121 main.go:141] libmachine: (no-preload-721806) DBG | created network xml: 
	I0414 17:39:43.572685  208121 main.go:141] libmachine: (no-preload-721806) DBG | <network>
	I0414 17:39:43.572742  208121 main.go:141] libmachine: (no-preload-721806) DBG |   <name>mk-no-preload-721806</name>
	I0414 17:39:43.572772  208121 main.go:141] libmachine: (no-preload-721806) DBG |   <dns enable='no'/>
	I0414 17:39:43.572784  208121 main.go:141] libmachine: (no-preload-721806) DBG |   
	I0414 17:39:43.572794  208121 main.go:141] libmachine: (no-preload-721806) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 17:39:43.572805  208121 main.go:141] libmachine: (no-preload-721806) DBG |     <dhcp>
	I0414 17:39:43.572845  208121 main.go:141] libmachine: (no-preload-721806) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 17:39:43.572867  208121 main.go:141] libmachine: (no-preload-721806) DBG |     </dhcp>
	I0414 17:39:43.572879  208121 main.go:141] libmachine: (no-preload-721806) DBG |   </ip>
	I0414 17:39:43.572888  208121 main.go:141] libmachine: (no-preload-721806) DBG |   
	I0414 17:39:43.572898  208121 main.go:141] libmachine: (no-preload-721806) DBG | </network>
	I0414 17:39:43.572907  208121 main.go:141] libmachine: (no-preload-721806) DBG | 
	I0414 17:39:43.578270  208121 main.go:141] libmachine: (no-preload-721806) DBG | trying to create private KVM network mk-no-preload-721806 192.168.39.0/24...
	I0414 17:39:43.672546  208121 main.go:141] libmachine: (no-preload-721806) DBG | private KVM network mk-no-preload-721806 192.168.39.0/24 created
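
The <network> XML above is what the kvm2 driver hands to libvirt to create the VM's private network. A minimal sketch of that step using the public libvirt Go bindings (an assumption for illustration; minikube ships its own driver plumbing rather than exactly this code):

	package main

	import (
		"log"

		libvirt "libvirt.org/go/libvirt"
	)

	const networkXML = `<network>
	  <name>mk-no-preload-721806</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()

		// Persist the network definition, then bring it up
		// (the "net-define" + "net-start" sequence).
		net, err := conn.NetworkDefineXML(networkXML)
		if err != nil {
			log.Fatalf("define network: %v", err)
		}
		defer net.Free()
		if err := net.Create(); err != nil {
			log.Fatalf("start network: %v", err)
		}
		log.Println("private KVM network mk-no-preload-721806 is active")
	}
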
	I0414 17:39:43.672633  208121 main.go:141] libmachine: (no-preload-721806) DBG | I0414 17:39:43.672493  208145 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 17:39:43.672650  208121 main.go:141] libmachine: (no-preload-721806) setting up store path in /home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806 ...
	I0414 17:39:43.672668  208121 main.go:141] libmachine: (no-preload-721806) building disk image from file:///home/jenkins/minikube-integration/20349-149500/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 17:39:43.672706  208121 main.go:141] libmachine: (no-preload-721806) Downloading /home/jenkins/minikube-integration/20349-149500/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20349-149500/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 17:39:44.029977  208121 main.go:141] libmachine: (no-preload-721806) DBG | I0414 17:39:44.029354  208145 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806/id_rsa...
	I0414 17:39:44.311352  208121 cache.go:157] /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0414 17:39:44.311388  208121 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 1.497597114s
	I0414 17:39:44.311406  208121 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0414 17:39:44.373421  208121 main.go:141] libmachine: (no-preload-721806) setting executable bit set on /home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806 (perms=drwx------)
	I0414 17:39:44.373459  208121 main.go:141] libmachine: (no-preload-721806) DBG | I0414 17:39:44.371769  208145 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806/no-preload-721806.rawdisk...
	I0414 17:39:44.373471  208121 main.go:141] libmachine: (no-preload-721806) setting executable bit set on /home/jenkins/minikube-integration/20349-149500/.minikube/machines (perms=drwxr-xr-x)
	I0414 17:39:44.373494  208121 main.go:141] libmachine: (no-preload-721806) setting executable bit set on /home/jenkins/minikube-integration/20349-149500/.minikube (perms=drwxr-xr-x)
	I0414 17:39:44.373504  208121 main.go:141] libmachine: (no-preload-721806) setting executable bit set on /home/jenkins/minikube-integration/20349-149500 (perms=drwxrwxr-x)
	I0414 17:39:44.373515  208121 main.go:141] libmachine: (no-preload-721806) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 17:39:44.373524  208121 main.go:141] libmachine: (no-preload-721806) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 17:39:44.373535  208121 main.go:141] libmachine: (no-preload-721806) creating domain...
	I0414 17:39:44.373546  208121 main.go:141] libmachine: (no-preload-721806) DBG | Writing magic tar header
	I0414 17:39:44.373560  208121 main.go:141] libmachine: (no-preload-721806) DBG | Writing SSH key tar header
	I0414 17:39:44.373572  208121 main.go:141] libmachine: (no-preload-721806) DBG | I0414 17:39:44.372061  208145 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806 ...
	I0414 17:39:44.373583  208121 main.go:141] libmachine: (no-preload-721806) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806
	I0414 17:39:44.373596  208121 main.go:141] libmachine: (no-preload-721806) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20349-149500/.minikube/machines
	I0414 17:39:44.373631  208121 main.go:141] libmachine: (no-preload-721806) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 17:39:44.373654  208121 main.go:141] libmachine: (no-preload-721806) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20349-149500
	I0414 17:39:44.373674  208121 main.go:141] libmachine: (no-preload-721806) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 17:39:44.373718  208121 main.go:141] libmachine: (no-preload-721806) DBG | checking permissions on dir: /home/jenkins
	I0414 17:39:44.373741  208121 main.go:141] libmachine: (no-preload-721806) DBG | checking permissions on dir: /home
	I0414 17:39:44.374242  208121 main.go:141] libmachine: (no-preload-721806) DBG | skipping /home - not owner
	I0414 17:39:44.374269  208121 main.go:141] libmachine: (no-preload-721806) define libvirt domain using xml: 
	I0414 17:39:44.374281  208121 main.go:141] libmachine: (no-preload-721806) <domain type='kvm'>
	I0414 17:39:44.374290  208121 main.go:141] libmachine: (no-preload-721806)   <name>no-preload-721806</name>
	I0414 17:39:44.374305  208121 main.go:141] libmachine: (no-preload-721806)   <memory unit='MiB'>2200</memory>
	I0414 17:39:44.374314  208121 main.go:141] libmachine: (no-preload-721806)   <vcpu>2</vcpu>
	I0414 17:39:44.374322  208121 main.go:141] libmachine: (no-preload-721806)   <features>
	I0414 17:39:44.374327  208121 main.go:141] libmachine: (no-preload-721806)     <acpi/>
	I0414 17:39:44.374334  208121 main.go:141] libmachine: (no-preload-721806)     <apic/>
	I0414 17:39:44.374343  208121 main.go:141] libmachine: (no-preload-721806)     <pae/>
	I0414 17:39:44.374356  208121 main.go:141] libmachine: (no-preload-721806)     
	I0414 17:39:44.374363  208121 main.go:141] libmachine: (no-preload-721806)   </features>
	I0414 17:39:44.374375  208121 main.go:141] libmachine: (no-preload-721806)   <cpu mode='host-passthrough'>
	I0414 17:39:44.374382  208121 main.go:141] libmachine: (no-preload-721806)   
	I0414 17:39:44.374389  208121 main.go:141] libmachine: (no-preload-721806)   </cpu>
	I0414 17:39:44.374395  208121 main.go:141] libmachine: (no-preload-721806)   <os>
	I0414 17:39:44.374402  208121 main.go:141] libmachine: (no-preload-721806)     <type>hvm</type>
	I0414 17:39:44.374408  208121 main.go:141] libmachine: (no-preload-721806)     <boot dev='cdrom'/>
	I0414 17:39:44.374417  208121 main.go:141] libmachine: (no-preload-721806)     <boot dev='hd'/>
	I0414 17:39:44.374424  208121 main.go:141] libmachine: (no-preload-721806)     <bootmenu enable='no'/>
	I0414 17:39:44.374432  208121 main.go:141] libmachine: (no-preload-721806)   </os>
	I0414 17:39:44.374439  208121 main.go:141] libmachine: (no-preload-721806)   <devices>
	I0414 17:39:44.374448  208121 main.go:141] libmachine: (no-preload-721806)     <disk type='file' device='cdrom'>
	I0414 17:39:44.374457  208121 main.go:141] libmachine: (no-preload-721806)       <source file='/home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806/boot2docker.iso'/>
	I0414 17:39:44.374468  208121 main.go:141] libmachine: (no-preload-721806)       <target dev='hdc' bus='scsi'/>
	I0414 17:39:44.374474  208121 main.go:141] libmachine: (no-preload-721806)       <readonly/>
	I0414 17:39:44.374482  208121 main.go:141] libmachine: (no-preload-721806)     </disk>
	I0414 17:39:44.374490  208121 main.go:141] libmachine: (no-preload-721806)     <disk type='file' device='disk'>
	I0414 17:39:44.374501  208121 main.go:141] libmachine: (no-preload-721806)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 17:39:44.374513  208121 main.go:141] libmachine: (no-preload-721806)       <source file='/home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806/no-preload-721806.rawdisk'/>
	I0414 17:39:44.374523  208121 main.go:141] libmachine: (no-preload-721806)       <target dev='hda' bus='virtio'/>
	I0414 17:39:44.374529  208121 main.go:141] libmachine: (no-preload-721806)     </disk>
	I0414 17:39:44.374538  208121 main.go:141] libmachine: (no-preload-721806)     <interface type='network'>
	I0414 17:39:44.374546  208121 main.go:141] libmachine: (no-preload-721806)       <source network='mk-no-preload-721806'/>
	I0414 17:39:44.374555  208121 main.go:141] libmachine: (no-preload-721806)       <model type='virtio'/>
	I0414 17:39:44.374562  208121 main.go:141] libmachine: (no-preload-721806)     </interface>
	I0414 17:39:44.374571  208121 main.go:141] libmachine: (no-preload-721806)     <interface type='network'>
	I0414 17:39:44.374579  208121 main.go:141] libmachine: (no-preload-721806)       <source network='default'/>
	I0414 17:39:44.374589  208121 main.go:141] libmachine: (no-preload-721806)       <model type='virtio'/>
	I0414 17:39:44.374596  208121 main.go:141] libmachine: (no-preload-721806)     </interface>
	I0414 17:39:44.374604  208121 main.go:141] libmachine: (no-preload-721806)     <serial type='pty'>
	I0414 17:39:44.374613  208121 main.go:141] libmachine: (no-preload-721806)       <target port='0'/>
	I0414 17:39:44.374621  208121 main.go:141] libmachine: (no-preload-721806)     </serial>
	I0414 17:39:44.374628  208121 main.go:141] libmachine: (no-preload-721806)     <console type='pty'>
	I0414 17:39:44.374637  208121 main.go:141] libmachine: (no-preload-721806)       <target type='serial' port='0'/>
	I0414 17:39:44.374643  208121 main.go:141] libmachine: (no-preload-721806)     </console>
	I0414 17:39:44.374651  208121 main.go:141] libmachine: (no-preload-721806)     <rng model='virtio'>
	I0414 17:39:44.374658  208121 main.go:141] libmachine: (no-preload-721806)       <backend model='random'>/dev/random</backend>
	I0414 17:39:44.374665  208121 main.go:141] libmachine: (no-preload-721806)     </rng>
	I0414 17:39:44.374670  208121 main.go:141] libmachine: (no-preload-721806)     
	I0414 17:39:44.374678  208121 main.go:141] libmachine: (no-preload-721806)     
	I0414 17:39:44.374688  208121 main.go:141] libmachine: (no-preload-721806)   </devices>
	I0414 17:39:44.374696  208121 main.go:141] libmachine: (no-preload-721806) </domain>
	I0414 17:39:44.374701  208121 main.go:141] libmachine: (no-preload-721806) 
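
The <domain> definition above goes through the same define-then-start sequence. Continuing the previous sketch (defineAndStart is an illustrative name, not a minikube function; assumes the same libvirt bindings plus "fmt"):

	// defineAndStart mirrors the "define libvirt domain using xml" and
	// "creating domain..." steps: persist the domain config, then boot it.
	func defineAndStart(conn *libvirt.Connect, domainXML string) error {
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			return fmt.Errorf("define domain: %w", err)
		}
		defer dom.Free()
		// Create() on a defined domain starts it (the virsh "start" equivalent).
		return dom.Create()
	}
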
	I0414 17:39:44.378443  208121 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined MAC address 52:54:00:7b:e0:75 in network default
	I0414 17:39:44.379106  208121 main.go:141] libmachine: (no-preload-721806) starting domain...
	I0414 17:39:44.379118  208121 main.go:141] libmachine: (no-preload-721806) ensuring networks are active...
	I0414 17:39:44.379163  208121 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:39:44.379949  208121 main.go:141] libmachine: (no-preload-721806) Ensuring network default is active
	I0414 17:39:44.380342  208121 main.go:141] libmachine: (no-preload-721806) Ensuring network mk-no-preload-721806 is active
	I0414 17:39:44.381168  208121 main.go:141] libmachine: (no-preload-721806) getting domain XML...
	I0414 17:39:44.382012  208121 main.go:141] libmachine: (no-preload-721806) creating domain...
	I0414 17:39:44.442731  208121 cache.go:157] /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 exists
	I0414 17:39:44.442769  208121 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.2" -> "/home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2" took 1.629001479s
	I0414 17:39:44.442784  208121 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.2 -> /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 succeeded
	I0414 17:39:44.542863  208121 cache.go:157] /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 exists
	I0414 17:39:44.542894  208121 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.2" -> "/home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2" took 1.729348753s
	I0414 17:39:44.542909  208121 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.2 -> /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 succeeded
	I0414 17:39:44.578745  208121 cache.go:157] /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 exists
	I0414 17:39:44.578840  208121 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.2" -> "/home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2" took 1.765566878s
	I0414 17:39:44.578859  208121 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.2 -> /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 succeeded
	I0414 17:39:44.879482  208121 cache.go:157] /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0414 17:39:44.879533  208121 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 2.065987695s
	I0414 17:39:44.879551  208121 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0414 17:39:44.879575  208121 cache.go:87] Successfully saved all images to host disk.
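
The cache.go lines above follow one pattern per image: if the image has already been exported under .minikube/cache/images, reuse it; otherwise save it once. A rough sketch, with cachePathFor and saveToTar as hypothetical stand-ins for minikube's internal helpers (assumes "os"):

	// ensureCached reuses an on-disk image export when present and only
	// saves the image when it is missing.
	func ensureCached(image string) error {
		dst := cachePathFor(image) // e.g. .../images/amd64/registry.k8s.io/etcd_3.5.16-0
		if _, err := os.Stat(dst); err == nil {
			return nil // "cache image ... exists"
		} else if !os.IsNotExist(err) {
			return err
		}
		return saveToTar(image, dst) // "save to tar file ... succeeded"
	}
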
	I0414 17:39:46.330162  208121 main.go:141] libmachine: (no-preload-721806) waiting for IP...
	I0414 17:39:46.331204  208121 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:39:46.331766  208121 main.go:141] libmachine: (no-preload-721806) DBG | unable to find current IP address of domain no-preload-721806 in network mk-no-preload-721806
	I0414 17:39:46.331794  208121 main.go:141] libmachine: (no-preload-721806) DBG | I0414 17:39:46.331742  208145 retry.go:31] will retry after 268.617712ms: waiting for domain to come up
	I0414 17:39:46.602207  208121 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:39:46.604471  208121 main.go:141] libmachine: (no-preload-721806) DBG | unable to find current IP address of domain no-preload-721806 in network mk-no-preload-721806
	I0414 17:39:46.604527  208121 main.go:141] libmachine: (no-preload-721806) DBG | I0414 17:39:46.604439  208145 retry.go:31] will retry after 239.801669ms: waiting for domain to come up
	I0414 17:39:46.846159  208121 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:39:46.847112  208121 main.go:141] libmachine: (no-preload-721806) DBG | unable to find current IP address of domain no-preload-721806 in network mk-no-preload-721806
	I0414 17:39:46.847138  208121 main.go:141] libmachine: (no-preload-721806) DBG | I0414 17:39:46.847007  208145 retry.go:31] will retry after 429.247885ms: waiting for domain to come up
	I0414 17:39:47.277916  208121 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:39:47.278684  208121 main.go:141] libmachine: (no-preload-721806) DBG | unable to find current IP address of domain no-preload-721806 in network mk-no-preload-721806
	I0414 17:39:47.278710  208121 main.go:141] libmachine: (no-preload-721806) DBG | I0414 17:39:47.278590  208145 retry.go:31] will retry after 530.733487ms: waiting for domain to come up
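
The "will retry after ..." lines come from a jittered backoff loop that keeps polling libvirt for the domain's DHCP lease until an IP appears. A sketch of that pattern (waitForIP and lookupIP are illustrative names; the real loop lives in minikube's retry helpers; assumes "context", "fmt", "math/rand" and "time"):

	// waitForIP polls lookupIP with a growing, jittered delay until the
	// domain reports an address or the context deadline passes.
	func waitForIP(ctx context.Context, lookupIP func() (string, error)) (string, error) {
		delay := 250 * time.Millisecond
		for {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			select {
			case <-ctx.Done():
				return "", fmt.Errorf("waiting for domain to come up: %w", ctx.Err())
			case <-time.After(delay):
			}
			delay += time.Duration(rand.Int63n(int64(delay))) // jittered growth
		}
	}
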
	I0414 17:39:45.761715  204862 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:39:45.761732  204862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 17:39:45.761753  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHHostname
	I0414 17:39:45.774748  204862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39625
	I0414 17:39:45.774961  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:39:45.775496  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:a4:eb", ip: ""} in network mk-kubernetes-upgrade-771697: {Iface:virbr3 ExpiryTime:2025-04-14 18:38:34 +0000 UTC Type:0 Mac:52:54:00:5d:a4:eb Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:kubernetes-upgrade-771697 Clientid:01:52:54:00:5d:a4:eb}
	I0414 17:39:45.775522  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined IP address 192.168.61.160 and MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:39:45.775572  204862 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:39:45.776224  204862 main.go:141] libmachine: Using API Version  1
	I0414 17:39:45.776240  204862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:39:45.776442  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHPort
	I0414 17:39:45.776861  204862 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:39:45.777700  204862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:39:45.777739  204862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:39:45.780010  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHKeyPath
	I0414 17:39:45.780252  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHUsername
	I0414 17:39:45.780449  204862 sshutil.go:53] new ssh client: &{IP:192.168.61.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/kubernetes-upgrade-771697/id_rsa Username:docker}
	I0414 17:39:45.802488  204862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40481
	I0414 17:39:45.803059  204862 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:39:45.803619  204862 main.go:141] libmachine: Using API Version  1
	I0414 17:39:45.803639  204862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:39:45.804190  204862 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:39:45.804440  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetState
	I0414 17:39:45.813596  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .DriverName
	I0414 17:39:45.813851  204862 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 17:39:45.813870  204862 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 17:39:45.813894  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHHostname
	I0414 17:39:45.821164  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:39:45.821878  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:a4:eb", ip: ""} in network mk-kubernetes-upgrade-771697: {Iface:virbr3 ExpiryTime:2025-04-14 18:38:34 +0000 UTC Type:0 Mac:52:54:00:5d:a4:eb Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:kubernetes-upgrade-771697 Clientid:01:52:54:00:5d:a4:eb}
	I0414 17:39:45.822000  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | domain kubernetes-upgrade-771697 has defined IP address 192.168.61.160 and MAC address 52:54:00:5d:a4:eb in network mk-kubernetes-upgrade-771697
	I0414 17:39:45.822386  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHPort
	I0414 17:39:45.822593  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHKeyPath
	I0414 17:39:45.822746  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .GetSSHUsername
	I0414 17:39:45.822945  204862 sshutil.go:53] new ssh client: &{IP:192.168.61.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/kubernetes-upgrade-771697/id_rsa Username:docker}
	I0414 17:39:46.070710  204862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:39:46.107457  204862 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:39:46.107643  204862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:39:46.131274  204862 api_server.go:72] duration metric: took 415.481214ms to wait for apiserver process to appear ...
	I0414 17:39:46.131302  204862 api_server.go:88] waiting for apiserver healthz status ...
	I0414 17:39:46.131324  204862 api_server.go:253] Checking apiserver healthz at https://192.168.61.160:8443/healthz ...
	I0414 17:39:46.137537  204862 api_server.go:279] https://192.168.61.160:8443/healthz returned 200:
	ok
	I0414 17:39:46.139093  204862 api_server.go:141] control plane version: v1.32.2
	I0414 17:39:46.139168  204862 api_server.go:131] duration metric: took 7.856519ms to wait for apiserver health ...
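
The healthz wait above is a plain HTTPS poll of the apiserver's /healthz endpoint using the cluster CA. A minimal sketch, assuming a *tls.Config already loaded from the profile's certificates (waitForHealthz is an illustrative name; assumes "context", "crypto/tls", "net/http" and "time"):

	// waitForHealthz retries GET <url> until it returns 200 ("ok") or the
	// context is cancelled.
	func waitForHealthz(ctx context.Context, url string, tlsConfig *tls.Config) error {
		client := &http.Client{Transport: &http.Transport{TLSClientConfig: tlsConfig}}
		for {
			req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
			if err != nil {
				return err
			}
			resp, err := client.Do(req)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported healthy
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond):
			}
		}
	}
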
	I0414 17:39:46.139199  204862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 17:39:46.144923  204862 system_pods.go:59] 8 kube-system pods found
	I0414 17:39:46.145004  204862 system_pods.go:61] "coredns-668d6bf9bc-mrtjv" [7a6ffe5a-d350-4bb7-89ae-e488b64ef60a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 17:39:46.145030  204862 system_pods.go:61] "coredns-668d6bf9bc-xznwp" [169496de-3680-47e0-a32c-a9f93ed0b619] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 17:39:46.145496  204862 system_pods.go:61] "etcd-kubernetes-upgrade-771697" [9b37a749-6f6c-45cb-a290-c54b6ee96e67] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0414 17:39:46.145562  204862 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-771697" [78f98e66-e0e5-43ce-9ff1-794bf09323a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0414 17:39:46.145584  204862 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-771697" [8fb9cb46-2582-411c-b208-d8e141615bbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0414 17:39:46.145595  204862 system_pods.go:61] "kube-proxy-xg86l" [def9df07-a567-4cbe-8d6b-ed74663dfa47] Running
	I0414 17:39:46.145608  204862 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-771697" [b2d3328b-3391-4a94-aff3-d90d3dc16188] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0414 17:39:46.145642  204862 system_pods.go:61] "storage-provisioner" [82a80568-0d38-47f0-b9c6-08a053834b1d] Running
	I0414 17:39:46.145667  204862 system_pods.go:74] duration metric: took 6.448147ms to wait for pod list to return data ...
	I0414 17:39:46.145689  204862 kubeadm.go:582] duration metric: took 429.910552ms to wait for: map[apiserver:true system_pods:true]
	I0414 17:39:46.145736  204862 node_conditions.go:102] verifying NodePressure condition ...
	I0414 17:39:46.151588  204862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 17:39:46.151659  204862 node_conditions.go:123] node cpu capacity is 2
	I0414 17:39:46.151692  204862 node_conditions.go:105] duration metric: took 5.93684ms to run NodePressure ...
	I0414 17:39:46.151713  204862 start.go:241] waiting for startup goroutines ...
	I0414 17:39:46.327419  204862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:39:46.335704  204862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 17:39:48.020398  204862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.684652432s)
	I0414 17:39:48.020464  204862 main.go:141] libmachine: Making call to close driver server
	I0414 17:39:48.020477  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .Close
	I0414 17:39:48.020632  204862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.69313734s)
	I0414 17:39:48.020662  204862 main.go:141] libmachine: Making call to close driver server
	I0414 17:39:48.020672  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .Close
	I0414 17:39:48.020784  204862 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:39:48.020797  204862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:39:48.020806  204862 main.go:141] libmachine: Making call to close driver server
	I0414 17:39:48.020812  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .Close
	I0414 17:39:48.021180  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | Closing plugin on server side
	I0414 17:39:48.021212  204862 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:39:48.021219  204862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:39:48.021480  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | Closing plugin on server side
	I0414 17:39:48.021508  204862 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:39:48.021514  204862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:39:48.021522  204862 main.go:141] libmachine: Making call to close driver server
	I0414 17:39:48.021529  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .Close
	I0414 17:39:48.026157  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | Closing plugin on server side
	I0414 17:39:48.026184  204862 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:39:48.026205  204862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:39:48.030476  204862 main.go:141] libmachine: Making call to close driver server
	I0414 17:39:48.030497  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) Calling .Close
	I0414 17:39:48.032156  204862 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:39:48.032173  204862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:39:48.032176  204862 main.go:141] libmachine: (kubernetes-upgrade-771697) DBG | Closing plugin on server side
	I0414 17:39:48.033672  204862 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 17:39:48.035084  204862 addons.go:514] duration metric: took 2.319192529s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 17:39:48.035131  204862 start.go:246] waiting for cluster config update ...
	I0414 17:39:48.035145  204862 start.go:255] writing updated cluster config ...
	I0414 17:39:48.035426  204862 ssh_runner.go:195] Run: rm -f paused
	I0414 17:39:48.107724  204862 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 17:39:48.109129  204862 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-771697" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 14 17:39:49 kubernetes-upgrade-771697 crio[3150]: time="2025-04-14 17:39:49.004917437Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744652389004883999,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa04eb0e-9bf8-4a9a-bd25-96c5c2a3c36f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 17:39:49 kubernetes-upgrade-771697 crio[3150]: time="2025-04-14 17:39:49.005410004Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62cdc504-ecc2-4b2c-820a-e58b4e401d6c name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:39:49 kubernetes-upgrade-771697 crio[3150]: time="2025-04-14 17:39:49.005501901Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62cdc504-ecc2-4b2c-820a-e58b4e401d6c name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:39:49 kubernetes-upgrade-771697 crio[3150]: time="2025-04-14 17:39:49.005875259Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ee5ff7a67c0e7464f25fbccd88bda36a27b5044b8db2639efb961e2f4ab26a5,PodSandboxId:90d974009875dab6af0f7dfcfa05e9a0c90028dbb4f98f6dab986647f9eb6bc4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744652384542102958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mrtjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a6ffe5a-d350-4bb7-89ae-e488b64ef60a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85d55c410fd631ee092cd69b81b8a6d6bd1d2818b19c9d61e701c93c4a667b9c,PodSandboxId:e503697f7815fa09bd688d65bc24793a1d6342edef7365b447202eecd6c07b3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744652384632276900,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xznwp,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 169496de-3680-47e0-a32c-a9f93ed0b619,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5c5b7305b5d7c88c54eb3b45f6ccb598e1c6b576fd40660aa60debd89b6c44,PodSandboxId:f98c6cef99decc199466a145fb7056ae17170ee9c966ac40697e82aa8130a994,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1744652384462067761,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a80568-0d38-47f0-b9c6-08a053834b1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731929e3aa0bdaaee3a14fd79595e05a8464bb2d83e77db7bb0bdceaa72391c2,PodSandboxId:1a8cbde5de93f949edddbafc1e8fd3331730ff526d2375803d33cbaeced4b70b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNI
NG,CreatedAt:1744652380622189045,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-771697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed366ee78c2ae00743664ccec03057c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06ab3b12e65a373278e9b7b6085f348121c7b8dbe21f278c1875c898f56ebf60,PodSandboxId:5d78f3a60eee230a7f50c731b1309049c1eabf11dc805798638aa63dd28d46c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:17
44652380625082234,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-771697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d749dc95b3daec9b24bc3b7ad245422,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63b442004902710a17b0cbcb8e0b210c0ac6764d5b5b6ef67c4db85934a4f70,PodSandboxId:cd1e05870f648b1fed78cdf3b65f1af07517bb287110cf0ee9bb0c5fb64602a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744652380634334517,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-771697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deea28d5b95a0161001482c6e5fd4ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac016904a4213b99a47070ae1564d6c7a7e1e6cc619cff58846307451f64ed18,PodSandboxId:bd9fbc4430f99abae5bd6a27d98ce484f294d61d1039425ec302e0b15f3395a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744652378571514406,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xg86l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: def9df07-a567-4cbe-8d6b-ed74663dfa47,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3f34f6ca49f0c4d1604227ce426ae060a16a2d1ac34ebef098b6af589bc49a,PodSandboxId:f98c6cef99decc199466a145fb7056ae17170ee9c966ac40697e82aa8130a994,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1744652377577956366,Labels:map[string]string{io.kubernetes.con
tainer.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a80568-0d38-47f0-b9c6-08a053834b1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805e7a887e2611f8c5973feaff11ce93dc2b9dc56a8e44ad5f1bb406c0906313,PodSandboxId:82f5dad270ae466c4fa8ea754805ed9646d0d855a6a8569057728bdf56045f27,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744652376572320706,Labels:map[string]string{io.kubernetes.container.n
ame: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-771697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc3a8f0901dac16f097d4628c5c2d4f2,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c768ca92941422ba9b176f307a04118c45037776fa824fbd40096035226386d,PodSandboxId:e503697f7815fa09bd688d65bc24793a1d6342edef7365b447202eecd6c07b3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744652363380105821,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xznwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 169496de-3680-47e0-a32c-a9f93ed0b619,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c78db9d822a25b3e9e7cb13288c927bd7c01149ddf51d8f469a3df9f400b40e,PodSandboxId:90d974009875dab6af0f7dfcfa05e9a0c90028dbb4f98f6dab986647f9eb6bc4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744652363266541332,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mrtjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a6ffe5a-d350-4bb7-89ae-e488b64ef60a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0783d2882d3eaa07f574337c86546d3cdd95d36c2d01d39b0a2a0e1a858ecbfb,PodSandboxId:4db82cb2f41211686cdf44519ad977d84479c8f35e4e20e096ed408f7c4f
c05b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1744652359846036535,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xg86l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: def9df07-a567-4cbe-8d6b-ed74663dfa47,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b3ec76929456b1ba79d34505c38d00bfc2cbde9b7f076020adf115d65893a3,PodSandboxId:8b9ef04d4cccf65a2f7a91ce47f116fc3c534030771510833c016abe6fb562db,Metadata:&ContainerMetadata{Na
me:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744652359750448395,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-771697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d749dc95b3daec9b24bc3b7ad245422,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba002baf77c7c1de681006708c9550c2470fcd278442197a69e18298cf61807,PodSandboxId:d060c52e5c2eefe578ecd55d6c8853cbdbc85d812aa905136c335c05f37e95ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1
,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744652359722397145,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-771697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc3a8f0901dac16f097d4628c5c2d4f2,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58d93e203cbce7f74bd608f6f26e6632cbb28d85bbc51799c86e9cbd34e55c0,PodSandboxId:7705857010b687d5366127f743a0453cb6ce1684c39838de040f5015e4bfb202,Metadata:&ContainerMetadata{Name:kube-apiserv
er,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744652359643413582,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-771697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed366ee78c2ae00743664ccec03057c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fb8b574c605c5ed900dceb1d064ef60d0c627e5c31f718ca950ce600720ce1c,PodSandboxId:d46b277d6d9c5e9074cb273a5230e90970aec7f71f2adf698cd791ab97aaa6a6,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744652359681986196,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-771697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deea28d5b95a0161001482c6e5fd4ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62cdc504-ecc2-4b2c-820a-e58b4e401d6c name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:39:49 kubernetes-upgrade-771697 crio[3150]: time="2025-04-14 17:39:49.061370959Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d7c1e43c-c746-4b18-b653-e05f33df9f65 name=/runtime.v1.RuntimeService/Version
	Apr 14 17:39:49 kubernetes-upgrade-771697 crio[3150]: time="2025-04-14 17:39:49.061504984Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d7c1e43c-c746-4b18-b653-e05f33df9f65 name=/runtime.v1.RuntimeService/Version
	Apr 14 17:39:49 kubernetes-upgrade-771697 crio[3150]: time="2025-04-14 17:39:49.062845617Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=188a8bea-cd15-4d67-9197-1a2c523a1224 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 17:39:49 kubernetes-upgrade-771697 crio[3150]: time="2025-04-14 17:39:49.063225594Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744652389063205416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=188a8bea-cd15-4d67-9197-1a2c523a1224 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 17:39:49 kubernetes-upgrade-771697 crio[3150]: time="2025-04-14 17:39:49.064033956Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7aef00da-0e46-49ce-b05a-5549bad6fbde name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:39:49 kubernetes-upgrade-771697 crio[3150]: time="2025-04-14 17:39:49.064108668Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7aef00da-0e46-49ce-b05a-5549bad6fbde name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:39:49 kubernetes-upgrade-771697 crio[3150]: time="2025-04-14 17:39:49.064438497Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ee5ff7a67c0e7464f25fbccd88bda36a27b5044b8db2639efb961e2f4ab26a5,PodSandboxId:90d974009875dab6af0f7dfcfa05e9a0c90028dbb4f98f6dab986647f9eb6bc4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744652384542102958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mrtjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a6ffe5a-d350-4bb7-89ae-e488b64ef60a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85d55c410fd631ee092cd69b81b8a6d6bd1d2818b19c9d61e701c93c4a667b9c,PodSandboxId:e503697f7815fa09bd688d65bc24793a1d6342edef7365b447202eecd6c07b3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744652384632276900,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xznwp,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 169496de-3680-47e0-a32c-a9f93ed0b619,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5c5b7305b5d7c88c54eb3b45f6ccb598e1c6b576fd40660aa60debd89b6c44,PodSandboxId:f98c6cef99decc199466a145fb7056ae17170ee9c966ac40697e82aa8130a994,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1744652384462067761,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a80568-0d38-47f0-b9c6-08a053834b1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731929e3aa0bdaaee3a14fd79595e05a8464bb2d83e77db7bb0bdceaa72391c2,PodSandboxId:1a8cbde5de93f949edddbafc1e8fd3331730ff526d2375803d33cbaeced4b70b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNI
NG,CreatedAt:1744652380622189045,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-771697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed366ee78c2ae00743664ccec03057c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06ab3b12e65a373278e9b7b6085f348121c7b8dbe21f278c1875c898f56ebf60,PodSandboxId:5d78f3a60eee230a7f50c731b1309049c1eabf11dc805798638aa63dd28d46c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:17
44652380625082234,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-771697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d749dc95b3daec9b24bc3b7ad245422,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63b442004902710a17b0cbcb8e0b210c0ac6764d5b5b6ef67c4db85934a4f70,PodSandboxId:cd1e05870f648b1fed78cdf3b65f1af07517bb287110cf0ee9bb0c5fb64602a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744652380634334517,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-771697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deea28d5b95a0161001482c6e5fd4ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac016904a4213b99a47070ae1564d6c7a7e1e6cc619cff58846307451f64ed18,PodSandboxId:bd9fbc4430f99abae5bd6a27d98ce484f294d61d1039425ec302e0b15f3395a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744652378571514406,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xg86l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: def9df07-a567-4cbe-8d6b-ed74663dfa47,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3f34f6ca49f0c4d1604227ce426ae060a16a2d1ac34ebef098b6af589bc49a,PodSandboxId:f98c6cef99decc199466a145fb7056ae17170ee9c966ac40697e82aa8130a994,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1744652377577956366,Labels:map[string]string{io.kubernetes.con
tainer.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a80568-0d38-47f0-b9c6-08a053834b1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805e7a887e2611f8c5973feaff11ce93dc2b9dc56a8e44ad5f1bb406c0906313,PodSandboxId:82f5dad270ae466c4fa8ea754805ed9646d0d855a6a8569057728bdf56045f27,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744652376572320706,Labels:map[string]string{io.kubernetes.container.n
ame: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-771697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc3a8f0901dac16f097d4628c5c2d4f2,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c768ca92941422ba9b176f307a04118c45037776fa824fbd40096035226386d,PodSandboxId:e503697f7815fa09bd688d65bc24793a1d6342edef7365b447202eecd6c07b3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744652363380105821,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xznwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 169496de-3680-47e0-a32c-a9f93ed0b619,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c78db9d822a25b3e9e7cb13288c927bd7c01149ddf51d8f469a3df9f400b40e,PodSandboxId:90d974009875dab6af0f7dfcfa05e9a0c90028dbb4f98f6dab986647f9eb6bc4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744652363266541332,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mrtjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a6ffe5a-d350-4bb7-89ae-e488b64ef60a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0783d2882d3eaa07f574337c86546d3cdd95d36c2d01d39b0a2a0e1a858ecbfb,PodSandboxId:4db82cb2f41211686cdf44519ad977d84479c8f35e4e20e096ed408f7c4f
c05b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1744652359846036535,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xg86l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: def9df07-a567-4cbe-8d6b-ed74663dfa47,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b3ec76929456b1ba79d34505c38d00bfc2cbde9b7f076020adf115d65893a3,PodSandboxId:8b9ef04d4cccf65a2f7a91ce47f116fc3c534030771510833c016abe6fb562db,Metadata:&ContainerMetadata{Na
me:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744652359750448395,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-771697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d749dc95b3daec9b24bc3b7ad245422,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba002baf77c7c1de681006708c9550c2470fcd278442197a69e18298cf61807,PodSandboxId:d060c52e5c2eefe578ecd55d6c8853cbdbc85d812aa905136c335c05f37e95ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1
,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744652359722397145,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-771697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc3a8f0901dac16f097d4628c5c2d4f2,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58d93e203cbce7f74bd608f6f26e6632cbb28d85bbc51799c86e9cbd34e55c0,PodSandboxId:7705857010b687d5366127f743a0453cb6ce1684c39838de040f5015e4bfb202,Metadata:&ContainerMetadata{Name:kube-apiserv
er,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744652359643413582,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-771697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed366ee78c2ae00743664ccec03057c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fb8b574c605c5ed900dceb1d064ef60d0c627e5c31f718ca950ce600720ce1c,PodSandboxId:d46b277d6d9c5e9074cb273a5230e90970aec7f71f2adf698cd791ab97aaa6a6,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744652359681986196,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-771697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deea28d5b95a0161001482c6e5fd4ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7aef00da-0e46-49ce-b05a-5549bad6fbde name=/runtime.v1.RuntimeService/ListContainers
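	
	The repeated Version / ImageFsInfo / ListContainers entries above are status polls against CRI-O's CRI gRPC endpoint (the same RPCs named in the log: /runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo, /runtime.v1.RuntimeService/ListContainers). As a minimal sketch of one such poll — assuming the default CRI-O socket path /var/run/crio/crio.sock and the k8s.io/cri-api v1 client; illustrative only, not code from the test harness:
	
	    package main
	
	    import (
	    	"context"
	    	"fmt"
	    	"log"
	    	"time"
	
	    	"google.golang.org/grpc"
	    	"google.golang.org/grpc/credentials/insecure"
	    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )
	
	    func main() {
	    	// CRI-O's CRI endpoint is a local unix socket (default path assumed here).
	    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	    		grpc.WithTransportCredentials(insecure.NewCredentials()))
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	defer conn.Close()
	
	    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	    	defer cancel()
	
	    	rt := runtimeapi.NewRuntimeServiceClient(conn)
	    	img := runtimeapi.NewImageServiceClient(conn)
	
	    	// /runtime.v1.RuntimeService/Version
	    	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	fmt.Println(ver.RuntimeName, ver.RuntimeVersion) // cri-o 1.29.1 in this run
	
	    	// /runtime.v1.ImageService/ImageFsInfo reports image-store usage,
	    	// as in the ImageFsInfoResponse lines above.
	    	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	for _, u := range fs.GetImageFilesystems() {
	    		fmt.Println(u.GetFsId().GetMountpoint(), u.GetUsedBytes().GetValue())
	    	}
	
	    	// An empty ListContainersRequest returns every container, matching the
	    	// "No filters were applied, returning full container list" debug line.
	    	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	for _, c := range list.Containers {
	    		// CreatedAt is nanoseconds since the Unix epoch; the relative
	    		// CREATED column in the container status table below is derived
	    		// from it (e.g. CreatedAt:1744652384542102958 -> "4 seconds ago"
	    		// at the 17:39:49 log timestamps).
	    		age := time.Since(time.Unix(0, c.CreatedAt)).Truncate(time.Second)
	    		fmt.Printf("%-12.12s %-24s %-17s attempt=%d  %s ago\n",
	    			c.Id, c.Metadata.Name, c.State, c.Metadata.Attempt, age)
	    	}
	    }
	
	Running crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a on the node exercises the same ListContainers RPC and renders essentially the container status table that follows.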
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	85d55c410fd63       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   4 seconds ago       Running             coredns                   2                   e503697f7815f       coredns-668d6bf9bc-xznwp
	0ee5ff7a67c0e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   4 seconds ago       Running             coredns                   2                   90d974009875d       coredns-668d6bf9bc-mrtjv
	2d5c5b7305b5d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago       Running             storage-provisioner       3                   f98c6cef99dec       storage-provisioner
	a63b442004902       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   8 seconds ago       Running             kube-scheduler            2                   cd1e05870f648       kube-scheduler-kubernetes-upgrade-771697
	06ab3b12e65a3       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   8 seconds ago       Running             etcd                      2                   5d78f3a60eee2       etcd-kubernetes-upgrade-771697
	731929e3aa0bd       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   8 seconds ago       Running             kube-apiserver            2                   1a8cbde5de93f       kube-apiserver-kubernetes-upgrade-771697
	ac016904a4213       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   10 seconds ago      Running             kube-proxy                2                   bd9fbc4430f99       kube-proxy-xg86l
	cf3f34f6ca49f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago      Exited              storage-provisioner       2                   f98c6cef99dec       storage-provisioner
	805e7a887e261       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   12 seconds ago      Running             kube-controller-manager   2                   82f5dad270ae4       kube-controller-manager-kubernetes-upgrade-771697
	5c768ca929414       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   25 seconds ago      Exited              coredns                   1                   e503697f7815f       coredns-668d6bf9bc-xznwp
	5c78db9d822a2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   26 seconds ago      Exited              coredns                   1                   90d974009875d       coredns-668d6bf9bc-mrtjv
	0783d2882d3ea       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   29 seconds ago      Exited              kube-proxy                1                   4db82cb2f4121       kube-proxy-xg86l
	86b3ec7692945       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   29 seconds ago      Exited              etcd                      1                   8b9ef04d4cccf       etcd-kubernetes-upgrade-771697
	dba002baf77c7       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   29 seconds ago      Exited              kube-controller-manager   1                   d060c52e5c2ee       kube-controller-manager-kubernetes-upgrade-771697
	2fb8b574c605c       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   29 seconds ago      Exited              kube-scheduler            1                   d46b277d6d9c5       kube-scheduler-kubernetes-upgrade-771697
	b58d93e203cbc       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   29 seconds ago      Exited              kube-apiserver            1                   7705857010b68       kube-apiserver-kubernetes-upgrade-771697
	
	
	==> coredns [0ee5ff7a67c0e7464f25fbccd88bda36a27b5044b8db2639efb961e2f4ab26a5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [5c768ca92941422ba9b176f307a04118c45037776fa824fbd40096035226386d] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5c78db9d822a25b3e9e7cb13288c927bd7c01149ddf51d8f469a3df9f400b40e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [85d55c410fd631ee092cd69b81b8a6d6bd1d2818b19c9d61e701c93c4a667b9c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-771697
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-771697
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 17:38:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-771697
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 17:39:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 17:39:43 +0000   Mon, 14 Apr 2025 17:38:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 17:39:43 +0000   Mon, 14 Apr 2025 17:38:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 17:39:43 +0000   Mon, 14 Apr 2025 17:38:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 17:39:43 +0000   Mon, 14 Apr 2025 17:38:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.160
	  Hostname:    kubernetes-upgrade-771697
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 852a7fd8efc24b3fafd8e3d2d30179bc
	  System UUID:                852a7fd8-efc2-4b3f-afd8-e3d2d30179bc
	  Boot ID:                    ade0d936-4148-4863-a825-52e375172b02
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-mrtjv                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     47s
	  kube-system                 coredns-668d6bf9bc-xznwp                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     47s
	  kube-system                 etcd-kubernetes-upgrade-771697                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         54s
	  kube-system                 kube-apiserver-kubernetes-upgrade-771697             250m (12%)    0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-771697    200m (10%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-xg86l                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-scheduler-kubernetes-upgrade-771697             100m (5%)     0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 46s                kube-proxy       
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node kubernetes-upgrade-771697 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node kubernetes-upgrade-771697 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x7 over 59s)  kubelet          Node kubernetes-upgrade-771697 status is now: NodeHasSufficientPID
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           48s                node-controller  Node kubernetes-upgrade-771697 event: Registered Node kubernetes-upgrade-771697 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-771697 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-771697 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-771697 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node kubernetes-upgrade-771697 event: Registered Node kubernetes-upgrade-771697 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.102425] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.062198] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061600] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.192283] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.157440] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.296215] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +5.146028] systemd-fstab-generator[716]: Ignoring "noauto" option for root device
	[  +0.092331] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.388565] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +6.686397] systemd-fstab-generator[1245]: Ignoring "noauto" option for root device
	[  +0.082533] kauditd_printk_skb: 97 callbacks suppressed
	[Apr14 17:39] kauditd_printk_skb: 18 callbacks suppressed
	[ +16.154204] systemd-fstab-generator[2256]: Ignoring "noauto" option for root device
	[  +0.126128] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.068424] systemd-fstab-generator[2268]: Ignoring "noauto" option for root device
	[  +0.242889] systemd-fstab-generator[2287]: Ignoring "noauto" option for root device
	[  +0.529242] systemd-fstab-generator[2499]: Ignoring "noauto" option for root device
	[  +1.354818] systemd-fstab-generator[3020]: Ignoring "noauto" option for root device
	[  +2.504507] systemd-fstab-generator[3885]: Ignoring "noauto" option for root device
	[ +11.354994] kauditd_printk_skb: 300 callbacks suppressed
	[  +5.301673] systemd-fstab-generator[4311]: Ignoring "noauto" option for root device
	[  +0.092885] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.831776] systemd-fstab-generator[4779]: Ignoring "noauto" option for root device
	[  +0.212890] kauditd_printk_skb: 56 callbacks suppressed
	
	
	==> etcd [06ab3b12e65a373278e9b7b6085f348121c7b8dbe21f278c1875c898f56ebf60] <==
	{"level":"info","ts":"2025-04-14T17:39:41.001837Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4f026004cefb8cff","local-member-id":"81fdf5bab04ef248","added-peer-id":"81fdf5bab04ef248","added-peer-peer-urls":["https://192.168.61.160:2380"]}
	{"level":"info","ts":"2025-04-14T17:39:41.002007Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4f026004cefb8cff","local-member-id":"81fdf5bab04ef248","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T17:39:41.002048Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T17:39:41.003811Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"81fdf5bab04ef248","initial-advertise-peer-urls":["https://192.168.61.160:2380"],"listen-peer-urls":["https://192.168.61.160:2380"],"advertise-client-urls":["https://192.168.61.160:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.160:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-14T17:39:41.003909Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-14T17:39:41.003985Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.61.160:2380"}
	{"level":"info","ts":"2025-04-14T17:39:41.004013Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.61.160:2380"}
	{"level":"info","ts":"2025-04-14T17:39:42.049392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81fdf5bab04ef248 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-14T17:39:42.049435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81fdf5bab04ef248 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-14T17:39:42.049475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81fdf5bab04ef248 received MsgPreVoteResp from 81fdf5bab04ef248 at term 2"}
	{"level":"info","ts":"2025-04-14T17:39:42.049492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81fdf5bab04ef248 became candidate at term 3"}
	{"level":"info","ts":"2025-04-14T17:39:42.049500Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81fdf5bab04ef248 received MsgVoteResp from 81fdf5bab04ef248 at term 3"}
	{"level":"info","ts":"2025-04-14T17:39:42.049512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81fdf5bab04ef248 became leader at term 3"}
	{"level":"info","ts":"2025-04-14T17:39:42.049533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 81fdf5bab04ef248 elected leader 81fdf5bab04ef248 at term 3"}
	{"level":"info","ts":"2025-04-14T17:39:42.055413Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"81fdf5bab04ef248","local-member-attributes":"{Name:kubernetes-upgrade-771697 ClientURLs:[https://192.168.61.160:2379]}","request-path":"/0/members/81fdf5bab04ef248/attributes","cluster-id":"4f026004cefb8cff","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T17:39:42.055433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T17:39:42.055671Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T17:39:42.056026Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T17:39:42.056056Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-14T17:39:42.056534Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T17:39:42.056779Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T17:39:42.057556Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.160:2379"}
	{"level":"info","ts":"2025-04-14T17:39:42.057868Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T17:39:47.800249Z","caller":"traceutil/trace.go:171","msg":"trace[196551425] transaction","detail":"{read_only:false; response_revision:459; number_of_response:1; }","duration":"248.092487ms","start":"2025-04-14T17:39:47.552123Z","end":"2025-04-14T17:39:47.800215Z","steps":["trace[196551425] 'process raft request'  (duration: 247.935486ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T17:39:47.816100Z","caller":"traceutil/trace.go:171","msg":"trace[1363464213] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"255.022431ms","start":"2025-04-14T17:39:47.561064Z","end":"2025-04-14T17:39:47.816086Z","steps":["trace[1363464213] 'process raft request'  (duration: 254.674726ms)"],"step_count":1}
	
	
	==> etcd [86b3ec76929456b1ba79d34505c38d00bfc2cbde9b7f076020adf115d65893a3] <==
	{"level":"warn","ts":"2025-04-14T17:39:20.854319Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2025-04-14T17:39:20.855310Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.61.160:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.61.160:2380","--initial-cluster=kubernetes-upgrade-771697=https://192.168.61.160:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.61.160:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.61.160:2380","--name=kubernetes-upgrade-771697","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2025-04-14T17:39:20.855831Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2025-04-14T17:39:20.855969Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2025-04-14T17:39:20.856218Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.61.160:2380"]}
	{"level":"info","ts":"2025-04-14T17:39:20.856374Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-14T17:39:20.865812Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.160:2379"]}
	{"level":"info","ts":"2025-04-14T17:39:20.871917Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"kubernetes-upgrade-771697","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.61.160:2380"],"listen-peer-urls":["https://192.168.61.160:2380"],"advertise-client-urls":["https://192.168.61.160:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.160:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	
	
	==> kernel <==
	 17:39:49 up 1 min,  0 users,  load average: 2.15, 0.73, 0.26
	Linux kubernetes-upgrade-771697 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [731929e3aa0bdaaee3a14fd79595e05a8464bb2d83e77db7bb0bdceaa72391c2] <==
	I0414 17:39:43.607549       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0414 17:39:43.609295       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0414 17:39:43.610148       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0414 17:39:43.612115       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0414 17:39:43.612933       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E0414 17:39:43.616406       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0414 17:39:43.618699       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0414 17:39:43.624086       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0414 17:39:43.627384       1 aggregator.go:171] initial CRD sync complete...
	I0414 17:39:43.630527       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 17:39:43.630626       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 17:39:43.630706       1 cache.go:39] Caches are synced for autoregister controller
	I0414 17:39:43.638088       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0414 17:39:43.660248       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0414 17:39:43.660542       1 policy_source.go:240] refreshing policies
	I0414 17:39:43.692031       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 17:39:44.191870       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 17:39:44.456714       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 17:39:44.769162       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 17:39:45.459071       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 17:39:45.542102       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 17:39:45.638899       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 17:39:45.647923       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 17:39:47.272084       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0414 17:39:47.366365       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [b58d93e203cbce7f74bd608f6f26e6632cbb28d85bbc51799c86e9cbd34e55c0] <==
	W0414 17:39:20.479412       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0414 17:39:20.480066       1 options.go:238] external host was not specified, using 192.168.61.160
	I0414 17:39:20.488035       1 server.go:143] Version: v1.32.2
	I0414 17:39:20.488187       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 17:39:21.381674       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0414 17:39:21.382065       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0414 17:39:21.382187       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0414 17:39:21.393732       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0414 17:39:21.412180       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0414 17:39:21.412210       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0414 17:39:21.412748       1 instance.go:233] Using reconciler: lease
	W0414 17:39:21.414682       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [805e7a887e2611f8c5973feaff11ce93dc2b9dc56a8e44ad5f1bb406c0906313] <==
	I0414 17:39:47.081995       1 shared_informer.go:320] Caches are synced for attach detach
	I0414 17:39:47.082236       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0414 17:39:47.100430       1 shared_informer.go:320] Caches are synced for expand
	I0414 17:39:47.100702       1 shared_informer.go:320] Caches are synced for stateful set
	I0414 17:39:47.105679       1 shared_informer.go:320] Caches are synced for service account
	I0414 17:39:47.105776       1 shared_informer.go:320] Caches are synced for PVC protection
	I0414 17:39:47.105781       1 shared_informer.go:320] Caches are synced for endpoint
	I0414 17:39:47.109323       1 shared_informer.go:320] Caches are synced for resource quota
	I0414 17:39:47.113377       1 shared_informer.go:320] Caches are synced for job
	I0414 17:39:47.113540       1 shared_informer.go:320] Caches are synced for daemon sets
	I0414 17:39:47.117006       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0414 17:39:47.117094       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0414 17:39:47.117174       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0414 17:39:47.119382       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0414 17:39:47.123912       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0414 17:39:47.130302       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 17:39:47.145855       1 shared_informer.go:320] Caches are synced for resource quota
	I0414 17:39:47.156634       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0414 17:39:47.189063       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 17:39:47.189215       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0414 17:39:47.189228       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0414 17:39:47.309013       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="247.296023ms"
	I0414 17:39:47.309156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="80.426µs"
	I0414 17:39:49.683367       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="32.387218ms"
	I0414 17:39:49.684812       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="67.332µs"
	
	
	==> kube-controller-manager [dba002baf77c7c1de681006708c9550c2470fcd278442197a69e18298cf61807] <==
	
	
	==> kube-proxy [0783d2882d3eaa07f574337c86546d3cdd95d36c2d01d39b0a2a0e1a858ecbfb] <==
	
	
	==> kube-proxy [ac016904a4213b99a47070ae1564d6c7a7e1e6cc619cff58846307451f64ed18] <==
	 >
	E0414 17:39:38.788127       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 17:39:38.790449       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-771697\": dial tcp 192.168.61.160:8443: connect: connection refused"
	E0414 17:39:39.880181       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-771697\": dial tcp 192.168.61.160:8443: connect: connection refused"
	I0414 17:39:43.603219       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.61.160"]
	E0414 17:39:43.603306       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 17:39:43.687303       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 17:39:43.687378       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 17:39:43.687412       1 server_linux.go:170] "Using iptables Proxier"
	I0414 17:39:43.692375       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 17:39:43.692845       1 server.go:497] "Version info" version="v1.32.2"
	I0414 17:39:43.693005       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 17:39:43.694428       1 config.go:199] "Starting service config controller"
	I0414 17:39:43.694521       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 17:39:43.694638       1 config.go:105] "Starting endpoint slice config controller"
	I0414 17:39:43.694664       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 17:39:43.695181       1 config.go:329] "Starting node config controller"
	I0414 17:39:43.695224       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 17:39:43.795362       1 shared_informer.go:320] Caches are synced for node config
	I0414 17:39:43.795421       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 17:39:43.795526       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [2fb8b574c605c5ed900dceb1d064ef60d0c627e5c31f718ca950ce600720ce1c] <==
	
	
	==> kube-scheduler [a63b442004902710a17b0cbcb8e0b210c0ac6764d5b5b6ef67c4db85934a4f70] <==
	I0414 17:39:41.496916       1 serving.go:386] Generated self-signed cert in-memory
	W0414 17:39:43.556059       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0414 17:39:43.556178       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0414 17:39:43.556221       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0414 17:39:43.556257       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0414 17:39:43.605137       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0414 17:39:43.606695       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 17:39:43.613015       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0414 17:39:43.613116       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 17:39:43.617843       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0414 17:39:43.617939       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0414 17:39:43.714448       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 17:39:43 kubernetes-upgrade-771697 kubelet[4318]: E0414 17:39:43.458945    4318 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-771697\" not found" node="kubernetes-upgrade-771697"
	Apr 14 17:39:43 kubernetes-upgrade-771697 kubelet[4318]: I0414 17:39:43.526118    4318 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-kubernetes-upgrade-771697"
	Apr 14 17:39:43 kubernetes-upgrade-771697 kubelet[4318]: E0414 17:39:43.683550    4318 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-kubernetes-upgrade-771697\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-771697"
	Apr 14 17:39:43 kubernetes-upgrade-771697 kubelet[4318]: I0414 17:39:43.683900    4318 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-kubernetes-upgrade-771697"
	Apr 14 17:39:43 kubernetes-upgrade-771697 kubelet[4318]: E0414 17:39:43.718365    4318 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-kubernetes-upgrade-771697\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-771697"
	Apr 14 17:39:43 kubernetes-upgrade-771697 kubelet[4318]: I0414 17:39:43.718621    4318 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-kubernetes-upgrade-771697"
	Apr 14 17:39:43 kubernetes-upgrade-771697 kubelet[4318]: I0414 17:39:43.726350    4318 kubelet_node_status.go:125] "Node was previously registered" node="kubernetes-upgrade-771697"
	Apr 14 17:39:43 kubernetes-upgrade-771697 kubelet[4318]: I0414 17:39:43.726625    4318 kubelet_node_status.go:79] "Successfully registered node" node="kubernetes-upgrade-771697"
	Apr 14 17:39:43 kubernetes-upgrade-771697 kubelet[4318]: I0414 17:39:43.726723    4318 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 14 17:39:43 kubernetes-upgrade-771697 kubelet[4318]: I0414 17:39:43.728067    4318 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 14 17:39:43 kubernetes-upgrade-771697 kubelet[4318]: E0414 17:39:43.738521    4318 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-kubernetes-upgrade-771697\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-771697"
	Apr 14 17:39:43 kubernetes-upgrade-771697 kubelet[4318]: I0414 17:39:43.738719    4318 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-kubernetes-upgrade-771697"
	Apr 14 17:39:43 kubernetes-upgrade-771697 kubelet[4318]: E0414 17:39:43.756777    4318 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-kubernetes-upgrade-771697\" already exists" pod="kube-system/etcd-kubernetes-upgrade-771697"
	Apr 14 17:39:44 kubernetes-upgrade-771697 kubelet[4318]: I0414 17:39:44.108761    4318 apiserver.go:52] "Watching apiserver"
	Apr 14 17:39:44 kubernetes-upgrade-771697 kubelet[4318]: I0414 17:39:44.137338    4318 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 14 17:39:44 kubernetes-upgrade-771697 kubelet[4318]: I0414 17:39:44.174928    4318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/def9df07-a567-4cbe-8d6b-ed74663dfa47-lib-modules\") pod \"kube-proxy-xg86l\" (UID: \"def9df07-a567-4cbe-8d6b-ed74663dfa47\") " pod="kube-system/kube-proxy-xg86l"
	Apr 14 17:39:44 kubernetes-upgrade-771697 kubelet[4318]: I0414 17:39:44.175033    4318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/82a80568-0d38-47f0-b9c6-08a053834b1d-tmp\") pod \"storage-provisioner\" (UID: \"82a80568-0d38-47f0-b9c6-08a053834b1d\") " pod="kube-system/storage-provisioner"
	Apr 14 17:39:44 kubernetes-upgrade-771697 kubelet[4318]: I0414 17:39:44.175103    4318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/def9df07-a567-4cbe-8d6b-ed74663dfa47-xtables-lock\") pod \"kube-proxy-xg86l\" (UID: \"def9df07-a567-4cbe-8d6b-ed74663dfa47\") " pod="kube-system/kube-proxy-xg86l"
	Apr 14 17:39:44 kubernetes-upgrade-771697 kubelet[4318]: I0414 17:39:44.416895    4318 scope.go:117] "RemoveContainer" containerID="cf3f34f6ca49f0c4d1604227ce426ae060a16a2d1ac34ebef098b6af589bc49a"
	Apr 14 17:39:44 kubernetes-upgrade-771697 kubelet[4318]: I0414 17:39:44.419863    4318 scope.go:117] "RemoveContainer" containerID="5c768ca92941422ba9b176f307a04118c45037776fa824fbd40096035226386d"
	Apr 14 17:39:44 kubernetes-upgrade-771697 kubelet[4318]: I0414 17:39:44.420101    4318 scope.go:117] "RemoveContainer" containerID="5c78db9d822a25b3e9e7cb13288c927bd7c01149ddf51d8f469a3df9f400b40e"
	Apr 14 17:39:44 kubernetes-upgrade-771697 kubelet[4318]: I0414 17:39:44.547533    4318 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-kubernetes-upgrade-771697"
	Apr 14 17:39:44 kubernetes-upgrade-771697 kubelet[4318]: E0414 17:39:44.573857    4318 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-kubernetes-upgrade-771697\" already exists" pod="kube-system/etcd-kubernetes-upgrade-771697"
	Apr 14 17:39:46 kubernetes-upgrade-771697 kubelet[4318]: I0414 17:39:46.583263    4318 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 14 17:39:49 kubernetes-upgrade-771697 kubelet[4318]: I0414 17:39:49.624776    4318 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [2d5c5b7305b5d7c88c54eb3b45f6ccb598e1c6b576fd40660aa60debd89b6c44] <==
	I0414 17:39:44.707444       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0414 17:39:44.752271       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0414 17:39:44.752734       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0414 17:39:44.784325       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0414 17:39:44.784912       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"39faedd8-919c-4bd0-a862-c651fa708bbc", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-771697_b846c6d3-bbee-4d2b-add1-dc3bab7f885c became leader
	I0414 17:39:44.785148       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-771697_b846c6d3-bbee-4d2b-add1-dc3bab7f885c!
	I0414 17:39:44.887833       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-771697_b846c6d3-bbee-4d2b-add1-dc3bab7f885c!
	
	
	==> storage-provisioner [cf3f34f6ca49f0c4d1604227ce426ae060a16a2d1ac34ebef098b6af589bc49a] <==
	I0414 17:39:37.680427       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0414 17:39:37.686489       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-771697 -n kubernetes-upgrade-771697
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-771697 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-771697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-771697
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-771697: (1.739421615s)
--- FAIL: TestKubernetesUpgrade (423.47s)
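Note: the post-mortem above is driven by three shell commands quoted from helpers_test.go (minikube status, minikube logs -n 25, and a kubectl field-selector query). For re-running them by hand outside the test harness, here is a minimal Go sketch; it is a hypothetical helper, not part of minikube or its test suite, and the binary path "out/minikube-linux-amd64" and the profile name are assumptions copied from this report.

package main

import (
	"fmt"
	"os/exec"
)

// run executes one diagnostic command and prints its combined output,
// mirroring the "(dbg) Run:" lines in the report above.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s\n", name, args, out)
	if err != nil {
		fmt.Printf("(exit error: %v)\n", err)
	}
}

func main() {
	const profile = "kubernetes-upgrade-771697" // profile under test in this report

	// Host/API-server status (cf. helpers_test.go:254).
	run("out/minikube-linux-amd64", "status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
	// Last 25 lines of cluster logs (cf. helpers_test.go:247).
	run("out/minikube-linux-amd64", "-p", profile, "logs", "-n", "25")
	// Pods stuck outside the Running phase (cf. helpers_test.go:261).
	run("kubectl", "--context", profile, "get", "po",
		"-o=jsonpath={.items[*].metadata.name}", "-A",
		"--field-selector=status.phase!=Running")
}

Each command is best-effort: the helper prints whatever output it gets and notes a non-zero exit instead of aborting, since a broken cluster is exactly when these diagnostics matter.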

TestPause/serial/SecondStartNoReconfiguration (66.74s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-439119 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-439119 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.925900237s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-439119] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20349
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-439119" primary control-plane node in "pause-439119" cluster
	* Updating the running kvm2 "pause-439119" VM ...
	* Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-439119" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0414 17:34:07.686432  195612 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:34:07.686544  195612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:34:07.686556  195612 out.go:358] Setting ErrFile to fd 2...
	I0414 17:34:07.686563  195612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:34:07.686856  195612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	I0414 17:34:07.687524  195612 out.go:352] Setting JSON to false
	I0414 17:34:07.688867  195612 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8146,"bootTime":1744643902,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 17:34:07.688984  195612 start.go:139] virtualization: kvm guest
	I0414 17:34:07.690982  195612 out.go:177] * [pause-439119] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 17:34:07.692223  195612 out.go:177]   - MINIKUBE_LOCATION=20349
	I0414 17:34:07.692219  195612 notify.go:220] Checking for updates...
	I0414 17:34:07.694479  195612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 17:34:07.695657  195612 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:34:07.696815  195612 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 17:34:07.698027  195612 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 17:34:07.699100  195612 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 17:34:07.701042  195612 config.go:182] Loaded profile config "pause-439119": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:34:07.701623  195612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:34:07.701693  195612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:34:07.718132  195612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43997
	I0414 17:34:07.718676  195612 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:34:07.719197  195612 main.go:141] libmachine: Using API Version  1
	I0414 17:34:07.719219  195612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:34:07.719539  195612 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:34:07.719761  195612 main.go:141] libmachine: (pause-439119) Calling .DriverName
	I0414 17:34:07.720001  195612 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 17:34:07.720446  195612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:34:07.720504  195612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:34:07.736247  195612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35179
	I0414 17:34:07.736804  195612 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:34:07.737452  195612 main.go:141] libmachine: Using API Version  1
	I0414 17:34:07.737500  195612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:34:07.737842  195612 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:34:07.738031  195612 main.go:141] libmachine: (pause-439119) Calling .DriverName
	I0414 17:34:07.776690  195612 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 17:34:07.777803  195612 start.go:297] selected driver: kvm2
	I0414 17:34:07.777880  195612 start.go:901] validating driver "kvm2" against &{Name:pause-439119 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-439119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:34:07.778082  195612 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 17:34:07.778526  195612 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:34:07.778644  195612 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20349-149500/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 17:34:07.794112  195612 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 17:34:07.795158  195612 cni.go:84] Creating CNI manager for ""
	I0414 17:34:07.795222  195612 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:34:07.795286  195612 start.go:340] cluster config:
	{Name:pause-439119 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-439119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:34:07.795481  195612 iso.go:125] acquiring lock: {Name:mk56ab209abfa01de10f2f82564ecd03de00499a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:34:07.797913  195612 out.go:177] * Starting "pause-439119" primary control-plane node in "pause-439119" cluster
	I0414 17:34:07.798982  195612 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 17:34:07.799011  195612 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 17:34:07.799020  195612 cache.go:56] Caching tarball of preloaded images
	I0414 17:34:07.799096  195612 preload.go:172] Found /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 17:34:07.799110  195612 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 17:34:07.799231  195612 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/pause-439119/config.json ...
	I0414 17:34:07.799430  195612 start.go:360] acquireMachinesLock for pause-439119: {Name:mk6f64d523f60ec1e047c10a4c586315976dcd43 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 17:34:22.690439  195612 start.go:364] duration metric: took 14.890957265s to acquireMachinesLock for "pause-439119"
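
	The acquireMachinesLock step above serializes concurrent minikube invocations that target the same machine; the 14.9s wait means another process held the lock first. For reference, a minimal Go sketch of an exclusive lock with a retry delay and overall timeout, mirroring the Delay:500ms/Timeout:13m0s values in the log (illustrative only; the lock path is hypothetical and this is not minikube's actual lock implementation):

	// lockdemo.go: acquire an exclusive lock file, retrying until a timeout.
	// A sketch only; the path and delays are illustrative, not minikube's.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			// O_CREATE|O_EXCL fails if the file already exists, giving an
			// atomic "try-lock" on any POSIX filesystem.
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
			}
			time.Sleep(delay) // matches the Delay:500ms seen in the log
		}
	}

	func main() {
		start := time.Now()
		release, err := acquireLock("/tmp/pause-439119.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer release()
		fmt.Printf("took %s to acquire lock\n", time.Since(start))
	}
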
	I0414 17:34:22.690505  195612 start.go:96] Skipping create...Using existing machine configuration
	I0414 17:34:22.690513  195612 fix.go:54] fixHost starting: 
	I0414 17:34:22.690987  195612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:34:22.691042  195612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:34:22.710262  195612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39869
	I0414 17:34:22.710657  195612 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:34:22.711311  195612 main.go:141] libmachine: Using API Version  1
	I0414 17:34:22.711335  195612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:34:22.711748  195612 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:34:22.711989  195612 main.go:141] libmachine: (pause-439119) Calling .DriverName
	I0414 17:34:22.712166  195612 main.go:141] libmachine: (pause-439119) Calling .GetState
	I0414 17:34:22.713774  195612 fix.go:112] recreateIfNeeded on pause-439119: state=Running err=<nil>
	W0414 17:34:22.713803  195612 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 17:34:22.715634  195612 out.go:177] * Updating the running kvm2 "pause-439119" VM ...
	I0414 17:34:22.716731  195612 machine.go:93] provisionDockerMachine start ...
	I0414 17:34:22.716749  195612 main.go:141] libmachine: (pause-439119) Calling .DriverName
	I0414 17:34:22.716929  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHHostname
	I0414 17:34:22.719441  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:22.720076  195612 main.go:141] libmachine: (pause-439119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:d9:11", ip: ""} in network mk-pause-439119: {Iface:virbr2 ExpiryTime:2025-04-14 18:32:57 +0000 UTC Type:0 Mac:52:54:00:ae:d9:11 Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:pause-439119 Clientid:01:52:54:00:ae:d9:11}
	I0414 17:34:22.720103  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined IP address 192.168.50.34 and MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:22.720248  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHPort
	I0414 17:34:22.720413  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHKeyPath
	I0414 17:34:22.720535  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHKeyPath
	I0414 17:34:22.720648  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHUsername
	I0414 17:34:22.720795  195612 main.go:141] libmachine: Using SSH client type: native
	I0414 17:34:22.721145  195612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0414 17:34:22.721176  195612 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 17:34:22.838751  195612 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-439119
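
	Each "About to run SSH command" / "SSH cmd err, output" pair above is one remote round trip into the VM. A minimal sketch of the same round trip using golang.org/x/crypto/ssh (the key path, user, and address mirror the log but are illustrative; this is not minikube's ssh_runner):

	// sshdemo.go: run a single command on the VM over SSH, as the
	// provisioning log above does with "hostname". A sketch assuming
	// key-based auth; not minikube's ssh_runner.
	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/.minikube/machines/pause-439119/id_rsa") // illustrative path
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.168.50.34:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()

		out, err := session.CombinedOutput("hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("SSH cmd output: %s", out)
	}
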
	
	I0414 17:34:22.838783  195612 main.go:141] libmachine: (pause-439119) Calling .GetMachineName
	I0414 17:34:22.839011  195612 buildroot.go:166] provisioning hostname "pause-439119"
	I0414 17:34:22.839037  195612 main.go:141] libmachine: (pause-439119) Calling .GetMachineName
	I0414 17:34:22.839236  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHHostname
	I0414 17:34:22.841808  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:22.842185  195612 main.go:141] libmachine: (pause-439119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:d9:11", ip: ""} in network mk-pause-439119: {Iface:virbr2 ExpiryTime:2025-04-14 18:32:57 +0000 UTC Type:0 Mac:52:54:00:ae:d9:11 Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:pause-439119 Clientid:01:52:54:00:ae:d9:11}
	I0414 17:34:22.842210  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined IP address 192.168.50.34 and MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:22.842395  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHPort
	I0414 17:34:22.842605  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHKeyPath
	I0414 17:34:22.842757  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHKeyPath
	I0414 17:34:22.842887  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHUsername
	I0414 17:34:22.843045  195612 main.go:141] libmachine: Using SSH client type: native
	I0414 17:34:22.843243  195612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0414 17:34:22.843260  195612 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-439119 && echo "pause-439119" | sudo tee /etc/hostname
	I0414 17:34:22.968864  195612 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-439119
	
	I0414 17:34:22.968898  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHHostname
	I0414 17:34:22.971845  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:22.972219  195612 main.go:141] libmachine: (pause-439119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:d9:11", ip: ""} in network mk-pause-439119: {Iface:virbr2 ExpiryTime:2025-04-14 18:32:57 +0000 UTC Type:0 Mac:52:54:00:ae:d9:11 Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:pause-439119 Clientid:01:52:54:00:ae:d9:11}
	I0414 17:34:22.972259  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined IP address 192.168.50.34 and MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:22.972400  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHPort
	I0414 17:34:22.972684  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHKeyPath
	I0414 17:34:22.972887  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHKeyPath
	I0414 17:34:22.973084  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHUsername
	I0414 17:34:22.973247  195612 main.go:141] libmachine: Using SSH client type: native
	I0414 17:34:22.973542  195612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0414 17:34:22.973570  195612 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-439119' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-439119/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-439119' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 17:34:23.086373  195612 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 17:34:23.086417  195612 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20349-149500/.minikube CaCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20349-149500/.minikube}
	I0414 17:34:23.086453  195612 buildroot.go:174] setting up certificates
	I0414 17:34:23.086468  195612 provision.go:84] configureAuth start
	I0414 17:34:23.086481  195612 main.go:141] libmachine: (pause-439119) Calling .GetMachineName
	I0414 17:34:23.086743  195612 main.go:141] libmachine: (pause-439119) Calling .GetIP
	I0414 17:34:23.089431  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:23.089812  195612 main.go:141] libmachine: (pause-439119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:d9:11", ip: ""} in network mk-pause-439119: {Iface:virbr2 ExpiryTime:2025-04-14 18:32:57 +0000 UTC Type:0 Mac:52:54:00:ae:d9:11 Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:pause-439119 Clientid:01:52:54:00:ae:d9:11}
	I0414 17:34:23.089868  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined IP address 192.168.50.34 and MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:23.090045  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHHostname
	I0414 17:34:23.092679  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:23.093058  195612 main.go:141] libmachine: (pause-439119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:d9:11", ip: ""} in network mk-pause-439119: {Iface:virbr2 ExpiryTime:2025-04-14 18:32:57 +0000 UTC Type:0 Mac:52:54:00:ae:d9:11 Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:pause-439119 Clientid:01:52:54:00:ae:d9:11}
	I0414 17:34:23.093089  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined IP address 192.168.50.34 and MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:23.093186  195612 provision.go:143] copyHostCerts
	I0414 17:34:23.093241  195612 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem, removing ...
	I0414 17:34:23.093258  195612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem
	I0414 17:34:23.093311  195612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem (1082 bytes)
	I0414 17:34:23.093399  195612 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem, removing ...
	I0414 17:34:23.093408  195612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem
	I0414 17:34:23.093438  195612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem (1123 bytes)
	I0414 17:34:23.093513  195612 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem, removing ...
	I0414 17:34:23.093523  195612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem
	I0414 17:34:23.093548  195612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem (1675 bytes)
	I0414 17:34:23.093595  195612 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem org=jenkins.pause-439119 san=[127.0.0.1 192.168.50.34 localhost minikube pause-439119]
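
	The "generating server cert" step issues a server certificate whose SANs cover every name the machine may be reached by (127.0.0.1, the VM IP, localhost, minikube, the profile name). A sketch of issuing a certificate with that SAN shape using crypto/x509; unlike the real provisioner, which signs with the minikube CA key, this self-signs for brevity:

	// certdemo.go: issue a self-signed server certificate with the SAN set
	// shown above (IPs plus hostnames). A sketch with crypto/x509 only.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.pause-439119"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "pause-439119"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.34")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
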
	I0414 17:34:23.699293  195612 provision.go:177] copyRemoteCerts
	I0414 17:34:23.699358  195612 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 17:34:23.699384  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHHostname
	I0414 17:34:23.701884  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:23.702196  195612 main.go:141] libmachine: (pause-439119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:d9:11", ip: ""} in network mk-pause-439119: {Iface:virbr2 ExpiryTime:2025-04-14 18:32:57 +0000 UTC Type:0 Mac:52:54:00:ae:d9:11 Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:pause-439119 Clientid:01:52:54:00:ae:d9:11}
	I0414 17:34:23.702227  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined IP address 192.168.50.34 and MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:23.702398  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHPort
	I0414 17:34:23.702579  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHKeyPath
	I0414 17:34:23.702715  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHUsername
	I0414 17:34:23.702827  195612 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/pause-439119/id_rsa Username:docker}
	I0414 17:34:23.788138  195612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 17:34:23.816824  195612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 17:34:23.844433  195612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 17:34:23.867814  195612 provision.go:87] duration metric: took 781.331708ms to configureAuth
	I0414 17:34:23.867846  195612 buildroot.go:189] setting minikube options for container-runtime
	I0414 17:34:23.868036  195612 config.go:182] Loaded profile config "pause-439119": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:34:23.868107  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHHostname
	I0414 17:34:23.870796  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:23.871151  195612 main.go:141] libmachine: (pause-439119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:d9:11", ip: ""} in network mk-pause-439119: {Iface:virbr2 ExpiryTime:2025-04-14 18:32:57 +0000 UTC Type:0 Mac:52:54:00:ae:d9:11 Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:pause-439119 Clientid:01:52:54:00:ae:d9:11}
	I0414 17:34:23.871181  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined IP address 192.168.50.34 and MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:23.871310  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHPort
	I0414 17:34:23.871524  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHKeyPath
	I0414 17:34:23.871723  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHKeyPath
	I0414 17:34:23.871880  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHUsername
	I0414 17:34:23.872062  195612 main.go:141] libmachine: Using SSH client type: native
	I0414 17:34:23.872286  195612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0414 17:34:23.872312  195612 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 17:34:29.452001  195612 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 17:34:29.452065  195612 machine.go:96] duration metric: took 6.735288252s to provisionDockerMachine
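
	The drop-in written just above exports CRIO_MINIKUBE_OPTIONS so the runtime treats the service CIDR (10.96.0.0/12) as an insecure registry range, letting in-cluster registries served over plain HTTP be pulled from. A sketch of composing that file locally (the real flow pipes it through sudo tee over SSH and then restarts crio; the output path here is illustrative):

	// dropin.go: write the /etc/sysconfig/crio.minikube drop-in seen above.
	// A local-filesystem sketch only.
	package main

	import (
		"fmt"
		"log"
		"os"
	)

	func main() {
		serviceCIDR := "10.96.0.0/12" // ServiceCIDR from the cluster config dump
		content := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
		if err := os.WriteFile("crio.minikube", []byte(content), 0o644); err != nil { // illustrative path
			log.Fatal(err)
		}
	}
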
	I0414 17:34:29.452082  195612 start.go:293] postStartSetup for "pause-439119" (driver="kvm2")
	I0414 17:34:29.452097  195612 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 17:34:29.452126  195612 main.go:141] libmachine: (pause-439119) Calling .DriverName
	I0414 17:34:29.452505  195612 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 17:34:29.452543  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHHostname
	I0414 17:34:29.455457  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:29.455873  195612 main.go:141] libmachine: (pause-439119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:d9:11", ip: ""} in network mk-pause-439119: {Iface:virbr2 ExpiryTime:2025-04-14 18:32:57 +0000 UTC Type:0 Mac:52:54:00:ae:d9:11 Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:pause-439119 Clientid:01:52:54:00:ae:d9:11}
	I0414 17:34:29.455904  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined IP address 192.168.50.34 and MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:29.456061  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHPort
	I0414 17:34:29.456235  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHKeyPath
	I0414 17:34:29.456366  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHUsername
	I0414 17:34:29.456504  195612 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/pause-439119/id_rsa Username:docker}
	I0414 17:34:29.541759  195612 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 17:34:29.546883  195612 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 17:34:29.546915  195612 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/addons for local assets ...
	I0414 17:34:29.546986  195612 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/files for local assets ...
	I0414 17:34:29.547108  195612 filesync.go:149] local asset: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem -> 1566332.pem in /etc/ssl/certs
	I0414 17:34:29.547281  195612 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 17:34:29.557982  195612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:34:29.585590  195612 start.go:296] duration metric: took 133.48982ms for postStartSetup
	I0414 17:34:29.585635  195612 fix.go:56] duration metric: took 6.895122059s for fixHost
	I0414 17:34:29.585659  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHHostname
	I0414 17:34:29.588689  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:29.588994  195612 main.go:141] libmachine: (pause-439119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:d9:11", ip: ""} in network mk-pause-439119: {Iface:virbr2 ExpiryTime:2025-04-14 18:32:57 +0000 UTC Type:0 Mac:52:54:00:ae:d9:11 Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:pause-439119 Clientid:01:52:54:00:ae:d9:11}
	I0414 17:34:29.589024  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined IP address 192.168.50.34 and MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:29.589139  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHPort
	I0414 17:34:29.589356  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHKeyPath
	I0414 17:34:29.589513  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHKeyPath
	I0414 17:34:29.589609  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHUsername
	I0414 17:34:29.589752  195612 main.go:141] libmachine: Using SSH client type: native
	I0414 17:34:29.590047  195612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0414 17:34:29.590061  195612 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 17:34:29.712980  195612 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744652069.699465900
	
	I0414 17:34:29.713010  195612 fix.go:216] guest clock: 1744652069.699465900
	I0414 17:34:29.713017  195612 fix.go:229] Guest: 2025-04-14 17:34:29.6994659 +0000 UTC Remote: 2025-04-14 17:34:29.585640132 +0000 UTC m=+21.951684649 (delta=113.825768ms)
	I0414 17:34:29.713056  195612 fix.go:200] guest clock delta is within tolerance: 113.825768ms
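
	The guest-clock check runs `date +%s.%N` inside the VM and compares the result against the host-side timestamp; a large skew would break certificate validity windows and log ordering. A sketch that parses the guest output and recomputes the 113.8ms delta from the exact values in the log (the one-second tolerance below is hypothetical):

	// clockdelta.go: reproduce the guest-clock comparison above. The
	// timestamps are taken verbatim from the log; only parsing and the
	// delta computation are shown.
	package main

	import (
		"fmt"
		"log"
		"strconv"
		"strings"
		"time"
	)

	func parseGuestClock(out string) (time.Time, error) {
		// `date +%s.%N` prints seconds, a dot, then nine fractional digits.
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1744652069.699465900") // value from the log
		if err != nil {
			log.Fatal(err)
		}
		remote := time.Date(2025, 4, 14, 17, 34, 29, 585640132, time.UTC) // host-side reference
		delta := guest.Sub(remote)
		const tolerance = time.Second // hypothetical threshold
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
	}
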
	I0414 17:34:29.713064  195612 start.go:83] releasing machines lock for "pause-439119", held for 7.022580847s
	I0414 17:34:29.713108  195612 main.go:141] libmachine: (pause-439119) Calling .DriverName
	I0414 17:34:29.713425  195612 main.go:141] libmachine: (pause-439119) Calling .GetIP
	I0414 17:34:29.716471  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:29.716897  195612 main.go:141] libmachine: (pause-439119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:d9:11", ip: ""} in network mk-pause-439119: {Iface:virbr2 ExpiryTime:2025-04-14 18:32:57 +0000 UTC Type:0 Mac:52:54:00:ae:d9:11 Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:pause-439119 Clientid:01:52:54:00:ae:d9:11}
	I0414 17:34:29.716925  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined IP address 192.168.50.34 and MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:29.717248  195612 main.go:141] libmachine: (pause-439119) Calling .DriverName
	I0414 17:34:29.717839  195612 main.go:141] libmachine: (pause-439119) Calling .DriverName
	I0414 17:34:29.718040  195612 main.go:141] libmachine: (pause-439119) Calling .DriverName
	I0414 17:34:29.718176  195612 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 17:34:29.718249  195612 ssh_runner.go:195] Run: cat /version.json
	I0414 17:34:29.718276  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHHostname
	I0414 17:34:29.718286  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHHostname
	I0414 17:34:29.721391  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:29.721742  195612 main.go:141] libmachine: (pause-439119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:d9:11", ip: ""} in network mk-pause-439119: {Iface:virbr2 ExpiryTime:2025-04-14 18:32:57 +0000 UTC Type:0 Mac:52:54:00:ae:d9:11 Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:pause-439119 Clientid:01:52:54:00:ae:d9:11}
	I0414 17:34:29.721769  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined IP address 192.168.50.34 and MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:29.721974  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHPort
	I0414 17:34:29.722149  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:29.722175  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHKeyPath
	I0414 17:34:29.722370  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHUsername
	I0414 17:34:29.722538  195612 main.go:141] libmachine: (pause-439119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:d9:11", ip: ""} in network mk-pause-439119: {Iface:virbr2 ExpiryTime:2025-04-14 18:32:57 +0000 UTC Type:0 Mac:52:54:00:ae:d9:11 Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:pause-439119 Clientid:01:52:54:00:ae:d9:11}
	I0414 17:34:29.722560  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined IP address 192.168.50.34 and MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:29.722554  195612 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/pause-439119/id_rsa Username:docker}
	I0414 17:34:29.722846  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHPort
	I0414 17:34:29.723027  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHKeyPath
	I0414 17:34:29.723191  195612 main.go:141] libmachine: (pause-439119) Calling .GetSSHUsername
	I0414 17:34:29.723331  195612 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/pause-439119/id_rsa Username:docker}
	I0414 17:34:29.829431  195612 ssh_runner.go:195] Run: systemctl --version
	I0414 17:34:29.835670  195612 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 17:34:29.998829  195612 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 17:34:30.006680  195612 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 17:34:30.006780  195612 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 17:34:30.018334  195612 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
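
	The find/-exec mv step above disables any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only minikube's own bridge config is active after the runtime restarts. The same rename expressed as a Go sketch (directory and patterns mirror the log; error handling is minimal):

	// cnidisable.go: rename matching CNI config files out of the way,
	// as the shell pipeline above does.
	package main

	import (
		"fmt"
		"log"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		dir := "/etc/cni/net.d"
		for _, pat := range []string{"*bridge*", "*podman*"} {
			matches, err := filepath.Glob(filepath.Join(dir, pat))
			if err != nil {
				log.Fatal(err)
			}
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					log.Fatal(err)
				}
				fmt.Printf("disabled %s\n", m)
			}
		}
	}
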
	I0414 17:34:30.018357  195612 start.go:495] detecting cgroup driver to use...
	I0414 17:34:30.018438  195612 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 17:34:30.041108  195612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 17:34:30.061878  195612 docker.go:217] disabling cri-docker service (if available) ...
	I0414 17:34:30.061963  195612 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 17:34:30.084517  195612 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 17:34:30.101911  195612 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 17:34:30.255905  195612 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 17:34:30.397998  195612 docker.go:233] disabling docker service ...
	I0414 17:34:30.398069  195612 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 17:34:30.416637  195612 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 17:34:30.441694  195612 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 17:34:30.587012  195612 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 17:34:30.721746  195612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 17:34:30.739363  195612 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 17:34:30.769812  195612 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 17:34:30.769922  195612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:30.783301  195612 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 17:34:30.783378  195612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:30.796608  195612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:30.808873  195612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:30.820874  195612 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 17:34:30.833133  195612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:30.846026  195612 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:30.857581  195612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:30.868346  195612 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 17:34:30.878457  195612 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
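
	The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, sets cgroup_manager to "cgroupfs", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls before the daemon-reload and crio restart that follow. A sketch of the key rewrites over a local copy of the file using regexp (not minikube's implementation):

	// criocfg.go: regexp equivalents of two of the sed edits above,
	// applied to a local copy of the config file.
	package main

	import (
		"log"
		"os"
		"regexp"
	)

	// setKey replaces any line assigning `key` with key = "value".
	func setKey(conf []byte, key, value string) []byte {
		re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
		return re.ReplaceAll(conf, []byte(key+" = \""+value+"\""))
	}

	func main() {
		conf, err := os.ReadFile("02-crio.conf") // illustrative local copy
		if err != nil {
			log.Fatal(err)
		}
		conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
		conf = setKey(conf, "cgroup_manager", "cgroupfs")
		if err := os.WriteFile("02-crio.conf", conf, 0o644); err != nil {
			log.Fatal(err)
		}
	}
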
	I0414 17:34:30.889146  195612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:34:31.028643  195612 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 17:34:34.206557  195612 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.177818254s)
	I0414 17:34:34.206600  195612 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 17:34:34.206661  195612 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 17:34:34.212106  195612 start.go:563] Will wait 60s for crictl version
	I0414 17:34:34.212156  195612 ssh_runner.go:195] Run: which crictl
	I0414 17:34:34.217218  195612 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 17:34:34.255056  195612 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
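
	"Will wait 60s for crictl version" above is a readiness poll: crictl only answers once the freshly restarted CRI-O socket is up. A sketch of that poll with os/exec (crictl must be on PATH and normally needs root to reach /var/run/crio/crio.sock; the 2s interval is hypothetical):

	// crictlcheck.go: retry `crictl version` until it succeeds or a
	// 60-second deadline passes, as the step above does.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(60 * time.Second)
		for {
			out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return
			}
			if time.Now().After(deadline) {
				log.Fatalf("crictl never became ready: %v\n%s", err, out)
			}
			time.Sleep(2 * time.Second) // hypothetical poll interval
		}
	}
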
	I0414 17:34:34.255134  195612 ssh_runner.go:195] Run: crio --version
	I0414 17:34:34.283351  195612 ssh_runner.go:195] Run: crio --version
	I0414 17:34:34.312765  195612 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 17:34:34.314113  195612 main.go:141] libmachine: (pause-439119) Calling .GetIP
	I0414 17:34:34.316503  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:34.316808  195612 main.go:141] libmachine: (pause-439119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:d9:11", ip: ""} in network mk-pause-439119: {Iface:virbr2 ExpiryTime:2025-04-14 18:32:57 +0000 UTC Type:0 Mac:52:54:00:ae:d9:11 Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:pause-439119 Clientid:01:52:54:00:ae:d9:11}
	I0414 17:34:34.316847  195612 main.go:141] libmachine: (pause-439119) DBG | domain pause-439119 has defined IP address 192.168.50.34 and MAC address 52:54:00:ae:d9:11 in network mk-pause-439119
	I0414 17:34:34.317069  195612 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0414 17:34:34.321490  195612 kubeadm.go:883] updating cluster {Name:pause-439119 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-439119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 17:34:34.321648  195612 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 17:34:34.321709  195612 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:34:34.369171  195612 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 17:34:34.369196  195612 crio.go:433] Images already preloaded, skipping extraction
	I0414 17:34:34.369249  195612 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:34:34.405031  195612 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 17:34:34.405062  195612 cache_images.go:84] Images are preloaded, skipping loading
	I0414 17:34:34.405080  195612 kubeadm.go:934] updating node { 192.168.50.34 8443 v1.32.2 crio true true} ...
	I0414 17:34:34.405209  195612 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-439119 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:pause-439119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 17:34:34.405273  195612 ssh_runner.go:195] Run: crio config
	I0414 17:34:34.454896  195612 cni.go:84] Creating CNI manager for ""
	I0414 17:34:34.454920  195612 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:34:34.454931  195612 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 17:34:34.454949  195612 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.34 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-439119 NodeName:pause-439119 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 17:34:34.455064  195612 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-439119"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.34"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.34"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
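
	The generated KubeletConfiguration above must stay consistent with the CRI-O settings applied earlier: cgroupDriver matches cgroup_manager ("cgroupfs"), and containerRuntimeEndpoint points at the same crio.sock. A sketch that unmarshals those two fields for a sanity check, assuming gopkg.in/yaml.v3 (only the fields of interest are modelled):

	// kubeletcfg.go: parse a fragment of the KubeletConfiguration above
	// and verify the cgroup driver / runtime endpoint pairing.
	package main

	import (
		"fmt"
		"log"

		"gopkg.in/yaml.v3"
	)

	type kubeletConfig struct {
		Kind                     string `yaml:"kind"`
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	}

	func main() {
		doc := `apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	`
		var cfg kubeletConfig
		if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
			log.Fatal(err)
		}
		if cfg.CgroupDriver != "cgroupfs" {
			log.Fatalf("unexpected cgroup driver %q", cfg.CgroupDriver)
		}
		fmt.Printf("%s: driver=%s endpoint=%s\n", cfg.Kind, cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint)
	}
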
	
	I0414 17:34:34.455122  195612 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 17:34:34.466240  195612 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 17:34:34.466298  195612 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 17:34:34.476081  195612 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0414 17:34:34.492941  195612 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 17:34:34.508878  195612 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0414 17:34:34.524785  195612 ssh_runner.go:195] Run: grep 192.168.50.34	control-plane.minikube.internal$ /etc/hosts
	I0414 17:34:34.528599  195612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:34:34.657640  195612 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:34:34.673073  195612 certs.go:68] Setting up /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/pause-439119 for IP: 192.168.50.34
	I0414 17:34:34.673095  195612 certs.go:194] generating shared ca certs ...
	I0414 17:34:34.673120  195612 certs.go:226] acquiring lock for ca certs: {Name:mk65518f71a0fe967168d84423f624d889cf0622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:34:34.673311  195612 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key
	I0414 17:34:34.673372  195612 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key
	I0414 17:34:34.673388  195612 certs.go:256] generating profile certs ...
	I0414 17:34:34.673498  195612 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/pause-439119/client.key
	I0414 17:34:34.673589  195612 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/pause-439119/apiserver.key.a4fb6a0c
	I0414 17:34:34.673647  195612 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/pause-439119/proxy-client.key
	I0414 17:34:34.673817  195612 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem (1338 bytes)
	W0414 17:34:34.673888  195612 certs.go:480] ignoring /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633_empty.pem, impossibly tiny 0 bytes
	I0414 17:34:34.673913  195612 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 17:34:34.673955  195612 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem (1082 bytes)
	I0414 17:34:34.673992  195612 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem (1123 bytes)
	I0414 17:34:34.674029  195612 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem (1675 bytes)
	I0414 17:34:34.674095  195612 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:34:34.674879  195612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 17:34:34.702305  195612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 17:34:34.725601  195612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 17:34:34.748529  195612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 17:34:34.771705  195612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/pause-439119/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 17:34:34.794665  195612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/pause-439119/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 17:34:34.817890  195612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/pause-439119/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 17:34:34.840970  195612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/pause-439119/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 17:34:34.863971  195612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 17:34:34.886637  195612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem --> /usr/share/ca-certificates/156633.pem (1338 bytes)
	I0414 17:34:34.912890  195612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /usr/share/ca-certificates/1566332.pem (1708 bytes)
	I0414 17:34:34.940272  195612 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 17:34:34.957499  195612 ssh_runner.go:195] Run: openssl version
	I0414 17:34:34.963262  195612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 17:34:34.975150  195612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:34:34.979658  195612 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 16:31 /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:34:34.979702  195612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:34:34.985732  195612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 17:34:34.995736  195612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156633.pem && ln -fs /usr/share/ca-certificates/156633.pem /etc/ssl/certs/156633.pem"
	I0414 17:34:35.007712  195612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156633.pem
	I0414 17:34:35.012563  195612 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 16:39 /usr/share/ca-certificates/156633.pem
	I0414 17:34:35.012603  195612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156633.pem
	I0414 17:34:35.018491  195612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/156633.pem /etc/ssl/certs/51391683.0"
	I0414 17:34:35.028083  195612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1566332.pem && ln -fs /usr/share/ca-certificates/1566332.pem /etc/ssl/certs/1566332.pem"
	I0414 17:34:35.087947  195612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1566332.pem
	I0414 17:34:35.101350  195612 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 16:39 /usr/share/ca-certificates/1566332.pem
	I0414 17:34:35.101419  195612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1566332.pem
	I0414 17:34:35.108310  195612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1566332.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 17:34:35.133711  195612 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 17:34:35.149925  195612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 17:34:35.184050  195612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 17:34:35.199611  195612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 17:34:35.241888  195612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 17:34:35.289864  195612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 17:34:35.326307  195612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
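Each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 86400 seconds (24 h), which is how minikube decides whether control-plane certs need regeneration. The same test in native Go with crypto/x509 (a sketch, not minikube's code):

	package sketch

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM cert at path expires inside d,
	// the same check as `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}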
	I0414 17:34:35.350922  195612 kubeadm.go:392] StartCluster: {Name:pause-439119 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-439119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:34:35.351090  195612 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 17:34:35.351163  195612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:34:35.608870  195612 cri.go:89] found id: "3bc0057d293116cbacb0e6a632962de9f744905af529cbf11dfc93eeacf9a0d4"
	I0414 17:34:35.608903  195612 cri.go:89] found id: "0c307b9e5210eb1e23710b4e462cee0a098a90d2637fe1b1029ca098bca51677"
	I0414 17:34:35.608910  195612 cri.go:89] found id: "2bd6f1e96009b1b8dfed3791c2e9154c1236ac2bf8422dcda647f8e121057270"
	I0414 17:34:35.608915  195612 cri.go:89] found id: "9200698d6793d9e32d38549491348c8d4035256fa82dca3a8b5e4b0ee0e8a41c"
	I0414 17:34:35.608920  195612 cri.go:89] found id: "0cb3bfdc0ba0f8f647802002426ed3c1d5fcad228cf0103753c80d5f8c99342e"
	I0414 17:34:35.608925  195612 cri.go:89] found id: "7c13b8ac87bdb3aaf358a85d2cb56ea2baf6ba3bfe0a67da619290fb381ae980"
	I0414 17:34:35.608929  195612 cri.go:89] found id: ""
	I0414 17:34:35.608986  195612 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
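The crictl listing at the end of the captured stderr shows how minikube enumerates kube-system containers by pod-namespace label before pausing or reconfiguring them. A sketch of the same call from Go (flags exactly as in the log; the helper name is mine):

	package sketch

	import (
		"os/exec"
		"strings"
	)

	// kubeSystemContainers returns the IDs printed by
	// `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`.
	func kubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil // one container ID per line
	}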
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-439119 -n pause-439119
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-439119 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-439119 logs -n 25: (1.306430482s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-993774 sudo cat                            | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo docker                         | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo                                | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo                                | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo cat                            | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo cat                            | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo                                | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo                                | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo                                | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo cat                            | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo cat                            | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo                                | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo                                | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo                                | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo find                           | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo crio                           | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-993774                                     | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC | 14 Apr 25 17:32 UTC |
	| start   | -p kubernetes-upgrade-771697                         | kubernetes-upgrade-771697 | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-900958 sudo                          | NoKubernetes-900958       | jenkins | v1.35.0 | 14 Apr 25 17:33 UTC |                     |
	|         | systemctl is-active --quiet                          |                           |         |         |                     |                     |
	|         | service kubelet                                      |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-900958                               | NoKubernetes-900958       | jenkins | v1.35.0 | 14 Apr 25 17:33 UTC | 14 Apr 25 17:33 UTC |
	| start   | -p stopped-upgrade-328583                            | minikube                  | jenkins | v1.26.0 | 14 Apr 25 17:33 UTC | 14 Apr 25 17:34 UTC |
	|         | --memory=2200 --vm-driver=kvm2                       |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                           |         |         |                     |                     |
	| start   | -p pause-439119                                      | pause-439119              | jenkins | v1.35.0 | 14 Apr 25 17:34 UTC | 14 Apr 25 17:35 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-328583 stop                          | minikube                  | jenkins | v1.26.0 | 14 Apr 25 17:34 UTC | 14 Apr 25 17:34 UTC |
	| start   | -p cert-expiration-560919                            | cert-expiration-560919    | jenkins | v1.35.0 | 14 Apr 25 17:34 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                              |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-328583                            | stopped-upgrade-328583    | jenkins | v1.35.0 | 14 Apr 25 17:34 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 17:34:51
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
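The header documents klog's line layout: a severity letter, month and day, a microsecond timestamp, the thread id, then file:line and the message. A small parser for lines in that format (a sketch; the capture-group layout is my own):

	package sketch

	import "regexp"

	// klogLine matches "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg".
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	// parse returns severity, month, day, time, thread, file, line and
	// message, or nil if s is not a klog-formatted line.
	func parse(s string) []string {
		m := klogLine.FindStringSubmatch(s)
		if m == nil {
			return nil
		}
		return m[1:]
	}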
	I0414 17:34:51.472588  196026 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:34:51.472843  196026 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:34:51.472853  196026 out.go:358] Setting ErrFile to fd 2...
	I0414 17:34:51.472858  196026 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:34:51.473027  196026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	I0414 17:34:51.473538  196026 out.go:352] Setting JSON to false
	I0414 17:34:51.474517  196026 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8189,"bootTime":1744643902,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 17:34:51.474604  196026 start.go:139] virtualization: kvm guest
	I0414 17:34:51.476439  196026 out.go:177] * [stopped-upgrade-328583] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 17:34:51.477586  196026 out.go:177]   - MINIKUBE_LOCATION=20349
	I0414 17:34:51.477583  196026 notify.go:220] Checking for updates...
	I0414 17:34:51.479680  196026 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 17:34:51.480726  196026 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:34:51.481794  196026 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 17:34:51.482889  196026 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 17:34:51.483981  196026 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 17:34:51.485553  196026 config.go:182] Loaded profile config "stopped-upgrade-328583": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0414 17:34:51.486043  196026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:34:51.486111  196026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:34:51.500926  196026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33607
	I0414 17:34:51.501412  196026 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:34:51.502037  196026 main.go:141] libmachine: Using API Version  1
	I0414 17:34:51.502058  196026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:34:51.502387  196026 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:34:51.502567  196026 main.go:141] libmachine: (stopped-upgrade-328583) Calling .DriverName
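The "Launching plugin server" / "Plugin server listening" / "Calling .GetVersion" sequence reflects libmachine's plugin model: each driver runs as a separate docker-machine-driver-kvm2 process and minikube talks to it over Go net/rpc on the advertised loopback port. A minimal client sketch under that assumption; the service and method names follow libmachine's rpcdriver convention but should be treated as illustrative, not a verified API:

	package sketch

	import "net/rpc"

	// askDriver dials the port printed in "Plugin server listening at
	// address ..." and issues one RPC, as the .GetMachineName call above.
	func askDriver(addr string) (string, error) {
		client, err := rpc.Dial("tcp", addr) // e.g. "127.0.0.1:33607"
		if err != nil {
			return "", err
		}
		defer client.Close()
		var name string
		// Service/method name is an assumption based on libmachine.
		if err := client.Call("RPCServerDriver.GetMachineName", struct{}{}, &name); err != nil {
			return "", err
		}
		return name, nil
	}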
	I0414 17:34:51.504190  196026 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0414 17:34:51.505346  196026 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 17:34:51.505615  196026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:34:51.505663  196026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:34:51.519778  196026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38413
	I0414 17:34:51.520141  196026 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:34:51.520522  196026 main.go:141] libmachine: Using API Version  1
	I0414 17:34:51.520554  196026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:34:51.520932  196026 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:34:51.521117  196026 main.go:141] libmachine: (stopped-upgrade-328583) Calling .DriverName
	I0414 17:34:51.554557  196026 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 17:34:51.555891  196026 start.go:297] selected driver: kvm2
	I0414 17:34:51.555910  196026 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-328583 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-328583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0414 17:34:51.556025  196026 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 17:34:51.557006  196026 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:34:51.557112  196026 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20349-149500/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 17:34:51.571540  196026 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 17:34:51.572040  196026 cni.go:84] Creating CNI manager for ""
	I0414 17:34:51.572115  196026 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:34:51.572185  196026 start.go:340] cluster config:
	{Name:stopped-upgrade-328583 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-328583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0414 17:34:51.572316  196026 iso.go:125] acquiring lock: {Name:mk56ab209abfa01de10f2f82564ecd03de00499a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:34:51.574537  196026 out.go:177] * Starting "stopped-upgrade-328583" primary control-plane node in "stopped-upgrade-328583" cluster
	I0414 17:34:48.017142  195612 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.143431617s)
	I0414 17:34:48.017186  195612 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:34:48.259854  195612 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:34:48.341377  195612 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:34:48.439103  195612 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:34:48.439193  195612 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:34:48.939673  195612 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:34:49.439906  195612 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:34:49.454914  195612 api_server.go:72] duration metric: took 1.015814478s to wait for apiserver process to appear ...
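"waiting for apiserver process" is a simple poll: rerun pgrep every 500 ms until kube-apiserver shows up or a deadline passes, which is why the Run lines above repeat at half-second intervals. A sketch of that loop (interval and timeout are illustrative):

	package sketch

	import (
		"errors"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls `pgrep -xnf kube-apiserver.*minikube.*`
	// until it succeeds or timeout elapses, as in the log above.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil // exit status 0: the process exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return errors.New("timed out waiting for kube-apiserver process")
	}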
	I0414 17:34:49.454943  195612 api_server.go:88] waiting for apiserver healthz status ...
	I0414 17:34:49.454966  195612 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8443/healthz ...
	I0414 17:34:51.936797  195612 api_server.go:279] https://192.168.50.34:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 17:34:51.936839  195612 api_server.go:103] status: https://192.168.50.34:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 17:34:51.936856  195612 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8443/healthz ...
	I0414 17:34:52.005230  195612 api_server.go:279] https://192.168.50.34:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 17:34:52.005266  195612 api_server.go:103] status: https://192.168.50.34:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 17:34:52.005285  195612 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8443/healthz ...
	I0414 17:34:52.011919  195612 api_server.go:279] https://192.168.50.34:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 17:34:52.011947  195612 api_server.go:103] status: https://192.168.50.34:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 17:34:52.455576  195612 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8443/healthz ...
	I0414 17:34:52.460756  195612 api_server.go:279] https://192.168.50.34:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 17:34:52.460785  195612 api_server.go:103] status: https://192.168.50.34:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 17:34:52.955703  195612 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8443/healthz ...
	I0414 17:34:52.959592  195612 api_server.go:279] https://192.168.50.34:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 17:34:52.959616  195612 api_server.go:103] status: https://192.168.50.34:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 17:34:53.455255  195612 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8443/healthz ...
	I0414 17:34:53.459187  195612 api_server.go:279] https://192.168.50.34:8443/healthz returned 200:
	ok
	I0414 17:34:53.465246  195612 api_server.go:141] control plane version: v1.32.2
	I0414 17:34:53.465269  195612 api_server.go:131] duration metric: took 4.01031956s to wait for apiserver health ...
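The healthz probes above trace the apiserver's startup: anonymous requests get 403 until the RBAC bootstrap roles exist, then 500 while poststarthooks are still failing, and finally 200. A sketch of such a probe loop; the InsecureSkipVerify shortcut is for illustration only (minikube authenticates against the cluster CA):

	package sketch

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthz polls https://<host>:8443/healthz until it returns
	// 200 OK, treating 403 and 500 as "not ready yet" like the log above.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{
				// Illustration only: skip cert verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
			Timeout: 5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s never became healthy", url)
	}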
	I0414 17:34:53.465278  195612 cni.go:84] Creating CNI manager for ""
	I0414 17:34:53.465284  195612 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:34:53.466854  195612 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
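With the kvm2 driver and the crio runtime, minikube falls back to the plain CNI bridge plugin and writes a conflist for it, which is what "Configuring bridge CNI" refers to. The config below is an illustrative bridge + host-local pairing, not the exact file minikube generates (path and field values are assumptions):

	package sketch

	import "os"

	// writeBridgeConflist drops an illustrative bridge CNI config.
	func writeBridgeConflist() error {
		conf := `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [{
	    "type": "bridge",
	    "bridge": "bridge",
	    "isDefaultGateway": true,
	    "ipMasq": true,
	    "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	  }]
	}`
		return os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conf), 0o644)
	}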
	I0414 17:34:50.060159  195953 machine.go:93] provisionDockerMachine start ...
	I0414 17:34:50.060173  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .DriverName
	I0414 17:34:50.060360  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHHostname
	I0414 17:34:50.063288  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.063772  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:50.063791  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.063977  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHPort
	I0414 17:34:50.064171  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:50.064303  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:50.064413  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHUsername
	I0414 17:34:50.064542  195953 main.go:141] libmachine: Using SSH client type: native
	I0414 17:34:50.064842  195953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.83 22 <nil> <nil>}
	I0414 17:34:50.064849  195953 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 17:34:50.188772  195953 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-560919
	
	I0414 17:34:50.188809  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetMachineName
	I0414 17:34:50.189049  195953 buildroot.go:166] provisioning hostname "cert-expiration-560919"
	I0414 17:34:50.189067  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetMachineName
	I0414 17:34:50.189286  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHHostname
	I0414 17:34:50.192828  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.193299  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:50.193315  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.193562  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHPort
	I0414 17:34:50.193743  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:50.193960  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:50.194147  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHUsername
	I0414 17:34:50.194403  195953 main.go:141] libmachine: Using SSH client type: native
	I0414 17:34:50.194741  195953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.83 22 <nil> <nil>}
	I0414 17:34:50.194756  195953 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-560919 && echo "cert-expiration-560919" | sudo tee /etc/hostname
	I0414 17:34:50.331018  195953 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-560919
	
	I0414 17:34:50.331038  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHHostname
	I0414 17:34:50.334233  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.334631  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:50.334650  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.334896  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHPort
	I0414 17:34:50.335067  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:50.335194  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:50.335335  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHUsername
	I0414 17:34:50.335487  195953 main.go:141] libmachine: Using SSH client type: native
	I0414 17:34:50.335744  195953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.83 22 <nil> <nil>}
	I0414 17:34:50.335758  195953 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-560919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-560919/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-560919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 17:34:50.459223  195953 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 17:34:50.459257  195953 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20349-149500/.minikube CaCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20349-149500/.minikube}
	I0414 17:34:50.459275  195953 buildroot.go:174] setting up certificates
	I0414 17:34:50.459285  195953 provision.go:84] configureAuth start
	I0414 17:34:50.459296  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetMachineName
	I0414 17:34:50.459551  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetIP
	I0414 17:34:50.461958  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.462270  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:50.462297  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.462464  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHHostname
	I0414 17:34:50.464676  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.465031  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:50.465051  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.465150  195953 provision.go:143] copyHostCerts
	I0414 17:34:50.465199  195953 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem, removing ...
	I0414 17:34:50.465214  195953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem
	I0414 17:34:50.465280  195953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem (1082 bytes)
	I0414 17:34:50.465361  195953 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem, removing ...
	I0414 17:34:50.465365  195953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem
	I0414 17:34:50.465386  195953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem (1123 bytes)
	I0414 17:34:50.465430  195953 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem, removing ...
	I0414 17:34:50.465433  195953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem
	I0414 17:34:50.465446  195953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem (1675 bytes)
	I0414 17:34:50.465482  195953 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-560919 san=[127.0.0.1 192.168.72.83 cert-expiration-560919 localhost minikube]
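"generating server cert ... san=[...]" builds the machine's TLS identity: a certificate whose subject alternative names cover 127.0.0.1, the guest IP, and the host and profile names, signed by the local CA. A condensed crypto/x509 sketch of the SAN handling (self-signed here for brevity; the real code signs with ca.pem/ca-key.pem, and the validity window is illustrative):

	package sketch

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServerCert builds a server certificate carrying the SANs from
	// the log line above, in DER form.
	func newServerCert() ([]byte, error) {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.cert-expiration-560919"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.83")},
			DNSNames:     []string{"cert-expiration-560919", "localhost", "minikube"},
		}
		return x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	}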
	I0414 17:34:50.596868  195953 provision.go:177] copyRemoteCerts
	I0414 17:34:50.596926  195953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 17:34:50.596946  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHHostname
	I0414 17:34:50.599588  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.599869  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:50.599890  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.600033  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHPort
	I0414 17:34:50.600195  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:50.600329  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHUsername
	I0414 17:34:50.600415  195953 sshutil.go:53] new ssh client: &{IP:192.168.72.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/cert-expiration-560919/id_rsa Username:docker}
	I0414 17:34:50.686325  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 17:34:50.710764  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0414 17:34:50.738833  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 17:34:50.769926  195953 provision.go:87] duration metric: took 310.626309ms to configureAuth
	I0414 17:34:50.769946  195953 buildroot.go:189] setting minikube options for container-runtime
	I0414 17:34:50.770154  195953 config.go:182] Loaded profile config "cert-expiration-560919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:34:50.770224  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHHostname
	I0414 17:34:50.775628  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.776132  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:50.776164  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.776336  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHPort
	I0414 17:34:50.776495  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:50.776654  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:50.776778  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHUsername
	I0414 17:34:50.776945  195953 main.go:141] libmachine: Using SSH client type: native
	I0414 17:34:50.777208  195953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.83 22 <nil> <nil>}
	I0414 17:34:50.777221  195953 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 17:34:54.586250  194818 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:34:54.586479  194818 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:34:51.575471  196026 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0414 17:34:51.575511  196026 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0414 17:34:51.575530  196026 cache.go:56] Caching tarball of preloaded images
	I0414 17:34:51.575620  196026 preload.go:172] Found /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 17:34:51.575634  196026 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0414 17:34:51.575737  196026 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/stopped-upgrade-328583/config.json ...
	I0414 17:34:51.575964  196026 start.go:360] acquireMachinesLock for stopped-upgrade-328583: {Name:mk6f64d523f60ec1e047c10a4c586315976dcd43 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 17:34:56.554219  196026 start.go:364] duration metric: took 4.978206565s to acquireMachinesLock for "stopped-upgrade-328583"
	I0414 17:34:56.554281  196026 start.go:96] Skipping create...Using existing machine configuration
	I0414 17:34:56.554289  196026 fix.go:54] fixHost starting: 
	I0414 17:34:56.554690  196026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:34:56.554746  196026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:34:56.571831  196026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41079
	I0414 17:34:56.572320  196026 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:34:56.572766  196026 main.go:141] libmachine: Using API Version  1
	I0414 17:34:56.572790  196026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:34:56.573123  196026 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:34:56.573311  196026 main.go:141] libmachine: (stopped-upgrade-328583) Calling .DriverName
	I0414 17:34:56.573460  196026 main.go:141] libmachine: (stopped-upgrade-328583) Calling .GetState
	I0414 17:34:56.574838  196026 fix.go:112] recreateIfNeeded on stopped-upgrade-328583: state=Stopped err=<nil>
	I0414 17:34:56.574878  196026 main.go:141] libmachine: (stopped-upgrade-328583) Calling .DriverName
	W0414 17:34:56.575026  196026 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 17:34:56.576736  196026 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-328583" ...
	I0414 17:34:53.468031  195612 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 17:34:53.478826  195612 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0414 17:34:53.495976  195612 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 17:34:53.499491  195612 system_pods.go:59] 6 kube-system pods found
	I0414 17:34:53.499522  195612 system_pods.go:61] "coredns-668d6bf9bc-xszpz" [af69601c-aaa4-4616-b17a-b7ffdeace7db] Running
	I0414 17:34:53.499532  195612 system_pods.go:61] "etcd-pause-439119" [4552ba2f-bcd9-4812-8f56-073d4303a1fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0414 17:34:53.499538  195612 system_pods.go:61] "kube-apiserver-pause-439119" [e83888e8-d458-4032-b9cb-3b5e58ad38e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0414 17:34:53.499545  195612 system_pods.go:61] "kube-controller-manager-pause-439119" [111c0e4d-4adf-4af4-b849-11a37fb2a9f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0414 17:34:53.499549  195612 system_pods.go:61] "kube-proxy-n9vxg" [6e5c41ec-d5b4-4578-9ad2-7e24118ebe43] Running
	I0414 17:34:53.499553  195612 system_pods.go:61] "kube-scheduler-pause-439119" [b9f8e14a-a971-44ae-bcc8-662699aaf178] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0414 17:34:53.499563  195612 system_pods.go:74] duration metric: took 3.570183ms to wait for pod list to return data ...
	I0414 17:34:53.499574  195612 node_conditions.go:102] verifying NodePressure condition ...
	I0414 17:34:53.501785  195612 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 17:34:53.501823  195612 node_conditions.go:123] node cpu capacity is 2
	I0414 17:34:53.501856  195612 node_conditions.go:105] duration metric: took 2.273422ms to run NodePressure ...
	I0414 17:34:53.501875  195612 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:34:53.770368  195612 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0414 17:34:53.773990  195612 kubeadm.go:739] kubelet initialised
	I0414 17:34:53.774009  195612 kubeadm.go:740] duration metric: took 3.614795ms waiting for restarted kubelet to initialise ...
	I0414 17:34:53.774016  195612 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:34:53.776355  195612 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-xszpz" in "kube-system" namespace to be "Ready" ...
	I0414 17:34:53.780023  195612 pod_ready.go:93] pod "coredns-668d6bf9bc-xszpz" in "kube-system" namespace has status "Ready":"True"
	I0414 17:34:53.780041  195612 pod_ready.go:82] duration metric: took 3.660443ms for pod "coredns-668d6bf9bc-xszpz" in "kube-system" namespace to be "Ready" ...
	I0414 17:34:53.780051  195612 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-439119" in "kube-system" namespace to be "Ready" ...
	I0414 17:34:55.785355  195612 pod_ready.go:103] pod "etcd-pause-439119" in "kube-system" namespace has status "Ready":"False"
	I0414 17:34:56.326755  195953 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 17:34:56.326772  195953 machine.go:96] duration metric: took 6.26660498s to provisionDockerMachine
	I0414 17:34:56.326784  195953 start.go:293] postStartSetup for "cert-expiration-560919" (driver="kvm2")
	I0414 17:34:56.326801  195953 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 17:34:56.326829  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .DriverName
	I0414 17:34:56.327136  195953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 17:34:56.327176  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHHostname
	I0414 17:34:56.329907  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:56.330282  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:56.330306  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:56.330438  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHPort
	I0414 17:34:56.330619  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:56.330750  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHUsername
	I0414 17:34:56.330857  195953 sshutil.go:53] new ssh client: &{IP:192.168.72.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/cert-expiration-560919/id_rsa Username:docker}
	I0414 17:34:56.411687  195953 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 17:34:56.415974  195953 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 17:34:56.415984  195953 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/addons for local assets ...
	I0414 17:34:56.416039  195953 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/files for local assets ...
	I0414 17:34:56.416122  195953 filesync.go:149] local asset: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem -> 1566332.pem in /etc/ssl/certs
	I0414 17:34:56.416217  195953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 17:34:56.425202  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:34:56.449143  195953 start.go:296] duration metric: took 122.343306ms for postStartSetup
	I0414 17:34:56.449164  195953 fix.go:56] duration metric: took 6.410510483s for fixHost
	I0414 17:34:56.449184  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHHostname
	I0414 17:34:56.451865  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:56.452205  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:56.452226  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:56.452386  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHPort
	I0414 17:34:56.452565  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:56.452713  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:56.452824  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHUsername
	I0414 17:34:56.452958  195953 main.go:141] libmachine: Using SSH client type: native
	I0414 17:34:56.453153  195953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.83 22 <nil> <nil>}
	I0414 17:34:56.453157  195953 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 17:34:56.554093  195953 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744652096.531510972
	
	I0414 17:34:56.554105  195953 fix.go:216] guest clock: 1744652096.531510972
	I0414 17:34:56.554112  195953 fix.go:229] Guest: 2025-04-14 17:34:56.531510972 +0000 UTC Remote: 2025-04-14 17:34:56.449166475 +0000 UTC m=+6.579917298 (delta=82.344497ms)
	I0414 17:34:56.554133  195953 fix.go:200] guest clock delta is within tolerance: 82.344497ms
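
The fix.go lines above compare the guest clock reported over SSH with the host's clock and proceed only because the skew is tiny. A minimal Go sketch of that tolerance check, reusing the exact timestamps from the log (the one-second threshold is an illustrative assumption, not minikube's configured tolerance):

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether the absolute skew between the guest and
	// host clocks stays below the allowed limit, mirroring the delta check in
	// the log above.
	func withinTolerance(guest, host time.Time, limit time.Duration) bool {
		d := guest.Sub(host)
		if d < 0 {
			d = -d
		}
		return d <= limit
	}

	func main() {
		guest := time.Date(2025, 4, 14, 17, 34, 56, 531510972, time.UTC)
		host := time.Date(2025, 4, 14, 17, 34, 56, 449166475, time.UTC)
		fmt.Println(withinTolerance(guest, host, time.Second)) // true: ~82.34ms skew
	}
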
	I0414 17:34:56.554138  195953 start.go:83] releasing machines lock for "cert-expiration-560919", held for 6.515492494s
	I0414 17:34:56.554163  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .DriverName
	I0414 17:34:56.554363  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetIP
	I0414 17:34:56.557089  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:56.557485  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:56.557505  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:56.557625  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .DriverName
	I0414 17:34:56.558123  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .DriverName
	I0414 17:34:56.558279  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .DriverName
	I0414 17:34:56.558364  195953 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 17:34:56.558398  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHHostname
	I0414 17:34:56.558497  195953 ssh_runner.go:195] Run: cat /version.json
	I0414 17:34:56.558518  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHHostname
	I0414 17:34:56.560997  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:56.561363  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:56.561383  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:56.561406  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:56.561658  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHPort
	I0414 17:34:56.561844  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:56.561906  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:56.561926  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:56.561965  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHUsername
	I0414 17:34:56.562042  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHPort
	I0414 17:34:56.562091  195953 sshutil.go:53] new ssh client: &{IP:192.168.72.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/cert-expiration-560919/id_rsa Username:docker}
	I0414 17:34:56.562153  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:56.562261  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHUsername
	I0414 17:34:56.562366  195953 sshutil.go:53] new ssh client: &{IP:192.168.72.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/cert-expiration-560919/id_rsa Username:docker}
	I0414 17:34:56.638553  195953 ssh_runner.go:195] Run: systemctl --version
	I0414 17:34:56.660455  195953 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 17:34:56.819730  195953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 17:34:56.828626  195953 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 17:34:56.828696  195953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 17:34:56.841453  195953 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0414 17:34:56.841467  195953 start.go:495] detecting cgroup driver to use...
	I0414 17:34:56.841536  195953 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 17:34:56.861035  195953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 17:34:56.877878  195953 docker.go:217] disabling cri-docker service (if available) ...
	I0414 17:34:56.877921  195953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 17:34:56.891379  195953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 17:34:56.905729  195953 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 17:34:57.064738  195953 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 17:34:57.219177  195953 docker.go:233] disabling docker service ...
	I0414 17:34:57.219223  195953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 17:34:57.239267  195953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 17:34:57.253972  195953 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 17:34:57.405702  195953 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 17:34:57.555224  195953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 17:34:57.570019  195953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 17:34:57.588825  195953 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 17:34:57.588876  195953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:57.599673  195953 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 17:34:57.599722  195953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:57.610348  195953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:57.620806  195953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:57.631961  195953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 17:34:57.642978  195953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:57.653515  195953 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:57.663999  195953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:57.674251  195953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 17:34:57.683834  195953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 17:34:57.694332  195953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:34:57.834285  195953 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 17:34:58.932396  195953 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.098088682s)
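
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf to pin the pause image to registry.k8s.io/pause:3.10 and to switch cri-o to the cgroupfs cgroup manager before restarting the service. For illustration, a hypothetical Go equivalent of the two central substitutions:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Stand-in content for /etc/crio/crio.conf.d/02-crio.conf.
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\n" +
			"cgroup_manager = \"systemd\"\n"

		// Same effect as the two sed substitutions in the log: pin the pause
		// image and switch the cgroup manager to cgroupfs.
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

		fmt.Print(conf)
	}
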
	I0414 17:34:58.932422  195953 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 17:34:58.932481  195953 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 17:34:58.937896  195953 start.go:563] Will wait 60s for crictl version
	I0414 17:34:58.937954  195953 ssh_runner.go:195] Run: which crictl
	I0414 17:34:58.941994  195953 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 17:34:58.984684  195953 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 17:34:58.984769  195953 ssh_runner.go:195] Run: crio --version
	I0414 17:34:59.013287  195953 ssh_runner.go:195] Run: crio --version
	I0414 17:34:59.045528  195953 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 17:34:59.046637  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetIP
	I0414 17:34:59.049720  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:59.050137  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:59.050160  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:59.050385  195953 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0414 17:34:59.054822  195953 kubeadm.go:883] updating cluster {Name:cert-expiration-560919 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:cert-expiration-560919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.83 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 17:34:59.054930  195953 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 17:34:59.054976  195953 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:34:59.103706  195953 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 17:34:59.103717  195953 crio.go:433] Images already preloaded, skipping extraction
	I0414 17:34:59.103776  195953 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:34:59.139082  195953 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 17:34:59.139094  195953 cache_images.go:84] Images are preloaded, skipping loading
	I0414 17:34:59.139099  195953 kubeadm.go:934] updating node { 192.168.72.83 8443 v1.32.2 crio true true} ...
	I0414 17:34:59.139183  195953 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-560919 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:cert-expiration-560919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 17:34:59.139241  195953 ssh_runner.go:195] Run: crio config
	I0414 17:34:59.190219  195953 cni.go:84] Creating CNI manager for ""
	I0414 17:34:59.190230  195953 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:34:59.190241  195953 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 17:34:59.190258  195953 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.83 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-560919 NodeName:cert-expiration-560919 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.83"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.83 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 17:34:59.190368  195953 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.83
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-560919"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.83"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.83"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
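
The kubeadm config above is one YAML stream holding four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sanity check on such a stream is to decode each document and print its apiVersion and kind; the sketch below assumes the stream has been saved locally as kubeadm.yaml and uses gopkg.in/yaml.v3:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Assumes the kubeadm config stream shown above was saved as kubeadm.yaml.
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			err := dec.Decode(&doc)
			if errors.Is(err, io.EOF) {
				break
			}
			if err != nil {
				panic(err)
			}
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}
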
	
	I0414 17:34:59.190418  195953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 17:34:59.201886  195953 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 17:34:59.201950  195953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 17:34:59.212873  195953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0414 17:34:59.229759  195953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 17:34:59.247390  195953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2299 bytes)
	I0414 17:34:59.264986  195953 ssh_runner.go:195] Run: grep 192.168.72.83	control-plane.minikube.internal$ /etc/hosts
	I0414 17:34:59.269112  195953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:34:59.412313  195953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:34:59.429036  195953 certs.go:68] Setting up /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919 for IP: 192.168.72.83
	I0414 17:34:59.429054  195953 certs.go:194] generating shared ca certs ...
	I0414 17:34:59.429085  195953 certs.go:226] acquiring lock for ca certs: {Name:mk65518f71a0fe967168d84423f624d889cf0622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:34:59.429356  195953 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key
	I0414 17:34:59.429419  195953 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key
	I0414 17:34:59.429427  195953 certs.go:256] generating profile certs ...
	W0414 17:34:59.429583  195953 out.go:270] ! Certificate client.crt has expired. Generating a new one...
	I0414 17:34:59.429621  195953 certs.go:624] cert expired /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/client.crt: expiration: 2025-04-14 17:34:36 +0000 UTC, now: 2025-04-14 17:34:59.429604714 +0000 UTC m=+9.560355557
	I0414 17:34:59.429804  195953 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/client.key
	I0414 17:34:59.429854  195953 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/client.crt with IP's: []
	I0414 17:34:56.577902  196026 main.go:141] libmachine: (stopped-upgrade-328583) Calling .Start
	I0414 17:34:56.578059  196026 main.go:141] libmachine: (stopped-upgrade-328583) starting domain...
	I0414 17:34:56.578076  196026 main.go:141] libmachine: (stopped-upgrade-328583) ensuring networks are active...
	I0414 17:34:56.578817  196026 main.go:141] libmachine: (stopped-upgrade-328583) Ensuring network default is active
	I0414 17:34:56.579157  196026 main.go:141] libmachine: (stopped-upgrade-328583) Ensuring network mk-stopped-upgrade-328583 is active
	I0414 17:34:56.579546  196026 main.go:141] libmachine: (stopped-upgrade-328583) getting domain XML...
	I0414 17:34:56.580258  196026 main.go:141] libmachine: (stopped-upgrade-328583) creating domain...
	I0414 17:34:57.829662  196026 main.go:141] libmachine: (stopped-upgrade-328583) waiting for IP...
	I0414 17:34:57.830795  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | domain stopped-upgrade-328583 has defined MAC address 52:54:00:82:fa:7d in network mk-stopped-upgrade-328583
	I0414 17:34:57.831278  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | unable to find current IP address of domain stopped-upgrade-328583 in network mk-stopped-upgrade-328583
	I0414 17:34:57.831443  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | I0414 17:34:57.831313  196078 retry.go:31] will retry after 254.367284ms: waiting for domain to come up
	I0414 17:34:58.087890  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | domain stopped-upgrade-328583 has defined MAC address 52:54:00:82:fa:7d in network mk-stopped-upgrade-328583
	I0414 17:34:58.088379  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | unable to find current IP address of domain stopped-upgrade-328583 in network mk-stopped-upgrade-328583
	I0414 17:34:58.088458  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | I0414 17:34:58.088366  196078 retry.go:31] will retry after 387.524544ms: waiting for domain to come up
	I0414 17:34:58.478090  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | domain stopped-upgrade-328583 has defined MAC address 52:54:00:82:fa:7d in network mk-stopped-upgrade-328583
	I0414 17:34:58.478595  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | unable to find current IP address of domain stopped-upgrade-328583 in network mk-stopped-upgrade-328583
	I0414 17:34:58.478625  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | I0414 17:34:58.478553  196078 retry.go:31] will retry after 425.878823ms: waiting for domain to come up
	I0414 17:34:58.906079  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | domain stopped-upgrade-328583 has defined MAC address 52:54:00:82:fa:7d in network mk-stopped-upgrade-328583
	I0414 17:34:58.906572  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | unable to find current IP address of domain stopped-upgrade-328583 in network mk-stopped-upgrade-328583
	I0414 17:34:58.906600  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | I0414 17:34:58.906529  196078 retry.go:31] will retry after 546.269245ms: waiting for domain to come up
	I0414 17:34:59.454183  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | domain stopped-upgrade-328583 has defined MAC address 52:54:00:82:fa:7d in network mk-stopped-upgrade-328583
	I0414 17:34:59.454665  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | unable to find current IP address of domain stopped-upgrade-328583 in network mk-stopped-upgrade-328583
	I0414 17:34:59.454688  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | I0414 17:34:59.454634  196078 retry.go:31] will retry after 727.754381ms: waiting for domain to come up
	I0414 17:35:00.183764  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | domain stopped-upgrade-328583 has defined MAC address 52:54:00:82:fa:7d in network mk-stopped-upgrade-328583
	I0414 17:35:00.184435  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | unable to find current IP address of domain stopped-upgrade-328583 in network mk-stopped-upgrade-328583
	I0414 17:35:00.184467  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | I0414 17:35:00.184383  196078 retry.go:31] will retry after 763.380109ms: waiting for domain to come up
	I0414 17:35:00.949192  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | domain stopped-upgrade-328583 has defined MAC address 52:54:00:82:fa:7d in network mk-stopped-upgrade-328583
	I0414 17:35:00.949732  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | unable to find current IP address of domain stopped-upgrade-328583 in network mk-stopped-upgrade-328583
	I0414 17:35:00.949761  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | I0414 17:35:00.949698  196078 retry.go:31] will retry after 1.051261488s: waiting for domain to come up
	I0414 17:34:57.786074  195612 pod_ready.go:103] pod "etcd-pause-439119" in "kube-system" namespace has status "Ready":"False"
	I0414 17:34:59.788768  195612 pod_ready.go:103] pod "etcd-pause-439119" in "kube-system" namespace has status "Ready":"False"
	I0414 17:35:01.287146  195612 pod_ready.go:93] pod "etcd-pause-439119" in "kube-system" namespace has status "Ready":"True"
	I0414 17:35:01.287173  195612 pod_ready.go:82] duration metric: took 7.507113484s for pod "etcd-pause-439119" in "kube-system" namespace to be "Ready" ...
	I0414 17:35:01.287186  195612 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-439119" in "kube-system" namespace to be "Ready" ...
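
The pod_ready checks above repeatedly fetch each system pod and wait for its Ready condition to become True. A hypothetical client-go sketch of the same check for a single pod (the kubeconfig path and pod name are placeholders drawn from the log):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady mirrors the condition the pod_ready lines wait on: the pod's
	// Ready condition must be True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-pause-439119", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("Ready:", podReady(pod))
	}
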
	I0414 17:35:00.210928  195953 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/client.crt ...
	I0414 17:35:00.210949  195953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/client.crt: {Name:mk1a3e86a227e16ac7d389fe6694ecf1cbc99d31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:35:00.211087  195953 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/client.key ...
	I0414 17:35:00.211095  195953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/client.key: {Name:mk9c4109d7dc18cee89f23d8f3807ca796c29532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0414 17:35:00.211243  195953 out.go:270] ! Certificate apiserver.crt.1713f484 has expired. Generating a new one...
	I0414 17:35:00.211262  195953 certs.go:624] cert expired /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.crt.1713f484: expiration: 2025-04-14 17:34:36 +0000 UTC, now: 2025-04-14 17:35:00.211254374 +0000 UTC m=+10.342005193
	I0414 17:35:00.211356  195953 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.key.1713f484
	I0414 17:35:00.211371  195953 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.crt.1713f484 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.83]
	I0414 17:35:00.328463  195953 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.crt.1713f484 ...
	I0414 17:35:00.328479  195953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.crt.1713f484: {Name:mk8472676c8a0638d6e929a80079b2398ac46873 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:35:00.328603  195953 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.key.1713f484 ...
	I0414 17:35:00.328611  195953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.key.1713f484: {Name:mk08cea86fd55a689ee66093943d0fe68f630963 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:35:00.328662  195953 certs.go:381] copying /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.crt.1713f484 -> /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.crt
	I0414 17:35:00.328788  195953 certs.go:385] copying /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.key.1713f484 -> /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.key
	W0414 17:35:00.328939  195953 out.go:270] ! Certificate proxy-client.crt has expired. Generating a new one...
	I0414 17:35:00.328956  195953 certs.go:624] cert expired /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/proxy-client.crt: expiration: 2025-04-14 17:34:36 +0000 UTC, now: 2025-04-14 17:35:00.328951112 +0000 UTC m=+10.459701934
	I0414 17:35:00.329031  195953 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/proxy-client.key
	I0414 17:35:00.329050  195953 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/proxy-client.crt with IP's: []
	I0414 17:35:00.685200  195953 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/proxy-client.crt ...
	I0414 17:35:00.685216  195953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/proxy-client.crt: {Name:mka60584e2d24895048518a295e78ab3c4d045ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:35:00.685359  195953 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/proxy-client.key ...
	I0414 17:35:00.685366  195953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/proxy-client.key: {Name:mk12d74da66aec6d92f16a26ae33cdd80a77e72d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:35:00.685507  195953 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem (1338 bytes)
	W0414 17:35:00.685537  195953 certs.go:480] ignoring /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633_empty.pem, impossibly tiny 0 bytes
	I0414 17:35:00.685543  195953 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 17:35:00.685563  195953 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem (1082 bytes)
	I0414 17:35:00.685580  195953 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem (1123 bytes)
	I0414 17:35:00.685599  195953 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem (1675 bytes)
	I0414 17:35:00.685634  195953 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:35:00.686215  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 17:35:00.720825  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 17:35:00.788568  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 17:35:00.879898  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 17:35:00.978116  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 17:35:01.012809  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 17:35:01.044791  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 17:35:01.111425  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 17:35:01.156811  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 17:35:01.189086  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem --> /usr/share/ca-certificates/156633.pem (1338 bytes)
	I0414 17:35:01.214450  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /usr/share/ca-certificates/1566332.pem (1708 bytes)
	I0414 17:35:01.248152  195953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 17:35:01.271355  195953 ssh_runner.go:195] Run: openssl version
	I0414 17:35:01.277145  195953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1566332.pem && ln -fs /usr/share/ca-certificates/1566332.pem /etc/ssl/certs/1566332.pem"
	I0414 17:35:01.291005  195953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1566332.pem
	I0414 17:35:01.296390  195953 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 16:39 /usr/share/ca-certificates/1566332.pem
	I0414 17:35:01.296438  195953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1566332.pem
	I0414 17:35:01.302540  195953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1566332.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 17:35:01.315624  195953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 17:35:01.327753  195953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:35:01.332195  195953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 16:31 /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:35:01.332238  195953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:35:01.337934  195953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 17:35:01.351483  195953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156633.pem && ln -fs /usr/share/ca-certificates/156633.pem /etc/ssl/certs/156633.pem"
	I0414 17:35:01.364312  195953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156633.pem
	I0414 17:35:01.370195  195953 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 16:39 /usr/share/ca-certificates/156633.pem
	I0414 17:35:01.370240  195953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156633.pem
	I0414 17:35:01.380830  195953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/156633.pem /etc/ssl/certs/51391683.0"
	I0414 17:35:01.397576  195953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 17:35:01.403089  195953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 17:35:01.411348  195953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 17:35:01.420324  195953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 17:35:01.429473  195953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 17:35:01.438189  195953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 17:35:01.446351  195953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
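
The openssl x509 -checkend 86400 invocations above ask whether each control-plane certificate will expire within the next 24 hours. The same test can be written directly against crypto/x509 in Go; the certificate path below is a placeholder:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Equivalent of `openssl x509 -checkend 86400`: does the certificate
		// expire within the next 24 hours? The path is a placeholder.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 86400 seconds")
		} else {
			fmt.Println("certificate is valid beyond 86400 seconds")
		}
	}
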
	I0414 17:35:01.454336  195953 kubeadm.go:392] StartCluster: {Name:cert-expiration-560919 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:cert-expiration-560919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.83 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:35:01.454413  195953 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 17:35:01.454485  195953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:35:01.505251  195953 cri.go:89] found id: "e2660940daeb42abf2f50c165dfa1073dbb84e53893d9df62831e5c754332025"
	I0414 17:35:01.505266  195953 cri.go:89] found id: "547d7964b06ba356ed04dedb93d72f142fe2185afe28b0650c72d6567139cab0"
	I0414 17:35:01.505270  195953 cri.go:89] found id: "c1d7060f701a1c1cd3e7d1b1d4b75f7c3d0880426b3fdb839c9a33bc0b0408e1"
	I0414 17:35:01.505273  195953 cri.go:89] found id: "4fc0c0cf0d9d9399433827437eb611ef8930c7db8fdc5abd04bd9a7ce9af4786"
	I0414 17:35:01.505276  195953 cri.go:89] found id: "66f47628927cc9771354efffd2014fc06eea59017aae0979dc4e701f5533a621"
	I0414 17:35:01.505279  195953 cri.go:89] found id: "b777bcd82a829e3cfc15f0cfe0d6dfc08f8a48272e8ee45bcc21d9be47d3cf50"
	I0414 17:35:01.505281  195953 cri.go:89] found id: "eb2ed7ff2c22f89bd6bbcdbf3feb547ac90b88803adf8eed00df51703c4b3d9e"
	I0414 17:35:01.505284  195953 cri.go:89] found id: "8d939468aa318dc3076ebed334eca6e8e1849e342451941b76e2dc57966bce87"
	I0414 17:35:01.505286  195953 cri.go:89] found id: "506d8235e69702667463ddddb28304c354cd3afffa4302a3a34d7bad32ddad7c"
	I0414 17:35:01.505294  195953 cri.go:89] found id: "f778abc2cc4ca8dc947b985068084ef3c626abc7a5f7939bfcc844368b63f96a"
	I0414 17:35:01.505297  195953 cri.go:89] found id: "6aa42639422dcd7d038fa6a0dad19d26232a43ac280e369113ba5e678a33b031"
	I0414 17:35:01.505299  195953 cri.go:89] found id: "241d383c162f9c1790ca165d7adf602b97568a444c5db09e3ece67cb4fe821c7"
	I0414 17:35:01.505302  195953 cri.go:89] found id: "23d8785a366019c3b9292722f760812b9f7c6395bdadcab9e04092a57d430bda"
	I0414 17:35:01.505305  195953 cri.go:89] found id: ""
	I0414 17:35:01.505356  195953 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-439119 -n pause-439119
helpers_test.go:261: (dbg) Run:  kubectl --context pause-439119 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-439119 -n pause-439119
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-439119 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-439119 logs -n 25: (1.307159155s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-993774 sudo cat                            | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo docker                         | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo                                | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo                                | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo cat                            | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo cat                            | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo                                | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo                                | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo                                | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo cat                            | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo cat                            | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo                                | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo                                | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo                                | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo find                           | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-993774 sudo crio                           | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-993774                                     | cilium-993774             | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC | 14 Apr 25 17:32 UTC |
	| start   | -p kubernetes-upgrade-771697                         | kubernetes-upgrade-771697 | jenkins | v1.35.0 | 14 Apr 25 17:32 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-900958 sudo                          | NoKubernetes-900958       | jenkins | v1.35.0 | 14 Apr 25 17:33 UTC |                     |
	|         | systemctl is-active --quiet                          |                           |         |         |                     |                     |
	|         | service kubelet                                      |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-900958                               | NoKubernetes-900958       | jenkins | v1.35.0 | 14 Apr 25 17:33 UTC | 14 Apr 25 17:33 UTC |
	| start   | -p stopped-upgrade-328583                            | minikube                  | jenkins | v1.26.0 | 14 Apr 25 17:33 UTC | 14 Apr 25 17:34 UTC |
	|         | --memory=2200 --vm-driver=kvm2                       |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                           |         |         |                     |                     |
	| start   | -p pause-439119                                      | pause-439119              | jenkins | v1.35.0 | 14 Apr 25 17:34 UTC | 14 Apr 25 17:35 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-328583 stop                          | minikube                  | jenkins | v1.26.0 | 14 Apr 25 17:34 UTC | 14 Apr 25 17:34 UTC |
	| start   | -p cert-expiration-560919                            | cert-expiration-560919    | jenkins | v1.35.0 | 14 Apr 25 17:34 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                              |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-328583                            | stopped-upgrade-328583    | jenkins | v1.35.0 | 14 Apr 25 17:34 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 17:34:51
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 17:34:51.472588  196026 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:34:51.472843  196026 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:34:51.472853  196026 out.go:358] Setting ErrFile to fd 2...
	I0414 17:34:51.472858  196026 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:34:51.473027  196026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	I0414 17:34:51.473538  196026 out.go:352] Setting JSON to false
	I0414 17:34:51.474517  196026 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8189,"bootTime":1744643902,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 17:34:51.474604  196026 start.go:139] virtualization: kvm guest
	I0414 17:34:51.476439  196026 out.go:177] * [stopped-upgrade-328583] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 17:34:51.477586  196026 out.go:177]   - MINIKUBE_LOCATION=20349
	I0414 17:34:51.477583  196026 notify.go:220] Checking for updates...
	I0414 17:34:51.479680  196026 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 17:34:51.480726  196026 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:34:51.481794  196026 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 17:34:51.482889  196026 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 17:34:51.483981  196026 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 17:34:51.485553  196026 config.go:182] Loaded profile config "stopped-upgrade-328583": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0414 17:34:51.486043  196026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:34:51.486111  196026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:34:51.500926  196026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33607
	I0414 17:34:51.501412  196026 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:34:51.502037  196026 main.go:141] libmachine: Using API Version  1
	I0414 17:34:51.502058  196026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:34:51.502387  196026 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:34:51.502567  196026 main.go:141] libmachine: (stopped-upgrade-328583) Calling .DriverName
	I0414 17:34:51.504190  196026 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0414 17:34:51.505346  196026 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 17:34:51.505615  196026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:34:51.505663  196026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:34:51.519778  196026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38413
	I0414 17:34:51.520141  196026 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:34:51.520522  196026 main.go:141] libmachine: Using API Version  1
	I0414 17:34:51.520554  196026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:34:51.520932  196026 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:34:51.521117  196026 main.go:141] libmachine: (stopped-upgrade-328583) Calling .DriverName
	I0414 17:34:51.554557  196026 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 17:34:51.555891  196026 start.go:297] selected driver: kvm2
	I0414 17:34:51.555910  196026 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-328583 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-328583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0414 17:34:51.556025  196026 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 17:34:51.557006  196026 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:34:51.557112  196026 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20349-149500/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 17:34:51.571540  196026 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 17:34:51.572040  196026 cni.go:84] Creating CNI manager for ""
	I0414 17:34:51.572115  196026 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:34:51.572185  196026 start.go:340] cluster config:
	{Name:stopped-upgrade-328583 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-328583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0414 17:34:51.572316  196026 iso.go:125] acquiring lock: {Name:mk56ab209abfa01de10f2f82564ecd03de00499a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:34:51.574537  196026 out.go:177] * Starting "stopped-upgrade-328583" primary control-plane node in "stopped-upgrade-328583" cluster
	I0414 17:34:48.017142  195612 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.143431617s)
	I0414 17:34:48.017186  195612 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:34:48.259854  195612 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:34:48.341377  195612 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:34:48.439103  195612 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:34:48.439193  195612 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:34:48.939673  195612 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:34:49.439906  195612 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:34:49.454914  195612 api_server.go:72] duration metric: took 1.015814478s to wait for apiserver process to appear ...
	I0414 17:34:49.454943  195612 api_server.go:88] waiting for apiserver healthz status ...
	I0414 17:34:49.454966  195612 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8443/healthz ...
	I0414 17:34:51.936797  195612 api_server.go:279] https://192.168.50.34:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 17:34:51.936839  195612 api_server.go:103] status: https://192.168.50.34:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 17:34:51.936856  195612 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8443/healthz ...
	I0414 17:34:52.005230  195612 api_server.go:279] https://192.168.50.34:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 17:34:52.005266  195612 api_server.go:103] status: https://192.168.50.34:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 17:34:52.005285  195612 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8443/healthz ...
	I0414 17:34:52.011919  195612 api_server.go:279] https://192.168.50.34:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 17:34:52.011947  195612 api_server.go:103] status: https://192.168.50.34:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 17:34:52.455576  195612 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8443/healthz ...
	I0414 17:34:52.460756  195612 api_server.go:279] https://192.168.50.34:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 17:34:52.460785  195612 api_server.go:103] status: https://192.168.50.34:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 17:34:52.955703  195612 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8443/healthz ...
	I0414 17:34:52.959592  195612 api_server.go:279] https://192.168.50.34:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 17:34:52.959616  195612 api_server.go:103] status: https://192.168.50.34:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 17:34:53.455255  195612 api_server.go:253] Checking apiserver healthz at https://192.168.50.34:8443/healthz ...
	I0414 17:34:53.459187  195612 api_server.go:279] https://192.168.50.34:8443/healthz returned 200:
	ok
	I0414 17:34:53.465246  195612 api_server.go:141] control plane version: v1.32.2
	I0414 17:34:53.465269  195612 api_server.go:131] duration metric: took 4.01031956s to wait for apiserver health ...
	I0414 17:34:53.465278  195612 cni.go:84] Creating CNI manager for ""
	I0414 17:34:53.465284  195612 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:34:53.466854  195612 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 17:34:50.060159  195953 machine.go:93] provisionDockerMachine start ...
	I0414 17:34:50.060173  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .DriverName
	I0414 17:34:50.060360  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHHostname
	I0414 17:34:50.063288  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.063772  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:50.063791  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.063977  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHPort
	I0414 17:34:50.064171  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:50.064303  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:50.064413  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHUsername
	I0414 17:34:50.064542  195953 main.go:141] libmachine: Using SSH client type: native
	I0414 17:34:50.064842  195953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.83 22 <nil> <nil>}
	I0414 17:34:50.064849  195953 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 17:34:50.188772  195953 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-560919
	
	I0414 17:34:50.188809  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetMachineName
	I0414 17:34:50.189049  195953 buildroot.go:166] provisioning hostname "cert-expiration-560919"
	I0414 17:34:50.189067  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetMachineName
	I0414 17:34:50.189286  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHHostname
	I0414 17:34:50.192828  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.193299  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:50.193315  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.193562  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHPort
	I0414 17:34:50.193743  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:50.193960  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:50.194147  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHUsername
	I0414 17:34:50.194403  195953 main.go:141] libmachine: Using SSH client type: native
	I0414 17:34:50.194741  195953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.83 22 <nil> <nil>}
	I0414 17:34:50.194756  195953 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-560919 && echo "cert-expiration-560919" | sudo tee /etc/hostname
	I0414 17:34:50.331018  195953 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-560919
	
	I0414 17:34:50.331038  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHHostname
	I0414 17:34:50.334233  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.334631  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:50.334650  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.334896  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHPort
	I0414 17:34:50.335067  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:50.335194  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:50.335335  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHUsername
	I0414 17:34:50.335487  195953 main.go:141] libmachine: Using SSH client type: native
	I0414 17:34:50.335744  195953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.83 22 <nil> <nil>}
	I0414 17:34:50.335758  195953 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-560919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-560919/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-560919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 17:34:50.459223  195953 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 17:34:50.459257  195953 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20349-149500/.minikube CaCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20349-149500/.minikube}
	I0414 17:34:50.459275  195953 buildroot.go:174] setting up certificates
	I0414 17:34:50.459285  195953 provision.go:84] configureAuth start
	I0414 17:34:50.459296  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetMachineName
	I0414 17:34:50.459551  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetIP
	I0414 17:34:50.461958  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.462270  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:50.462297  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.462464  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHHostname
	I0414 17:34:50.464676  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.465031  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:50.465051  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.465150  195953 provision.go:143] copyHostCerts
	I0414 17:34:50.465199  195953 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem, removing ...
	I0414 17:34:50.465214  195953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem
	I0414 17:34:50.465280  195953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem (1082 bytes)
	I0414 17:34:50.465361  195953 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem, removing ...
	I0414 17:34:50.465365  195953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem
	I0414 17:34:50.465386  195953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem (1123 bytes)
	I0414 17:34:50.465430  195953 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem, removing ...
	I0414 17:34:50.465433  195953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem
	I0414 17:34:50.465446  195953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem (1675 bytes)
	I0414 17:34:50.465482  195953 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-560919 san=[127.0.0.1 192.168.72.83 cert-expiration-560919 localhost minikube]
	I0414 17:34:50.596868  195953 provision.go:177] copyRemoteCerts
	I0414 17:34:50.596926  195953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 17:34:50.596946  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHHostname
	I0414 17:34:50.599588  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.599869  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:50.599890  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.600033  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHPort
	I0414 17:34:50.600195  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:50.600329  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHUsername
	I0414 17:34:50.600415  195953 sshutil.go:53] new ssh client: &{IP:192.168.72.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/cert-expiration-560919/id_rsa Username:docker}
	I0414 17:34:50.686325  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 17:34:50.710764  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0414 17:34:50.738833  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 17:34:50.769926  195953 provision.go:87] duration metric: took 310.626309ms to configureAuth
	I0414 17:34:50.769946  195953 buildroot.go:189] setting minikube options for container-runtime
	I0414 17:34:50.770154  195953 config.go:182] Loaded profile config "cert-expiration-560919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:34:50.770224  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHHostname
	I0414 17:34:50.775628  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.776132  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:50.776164  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:50.776336  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHPort
	I0414 17:34:50.776495  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:50.776654  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:50.776778  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHUsername
	I0414 17:34:50.776945  195953 main.go:141] libmachine: Using SSH client type: native
	I0414 17:34:50.777208  195953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.83 22 <nil> <nil>}
	I0414 17:34:50.777221  195953 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 17:34:54.586250  194818 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:34:54.586479  194818 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:34:51.575471  196026 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0414 17:34:51.575511  196026 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0414 17:34:51.575530  196026 cache.go:56] Caching tarball of preloaded images
	I0414 17:34:51.575620  196026 preload.go:172] Found /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 17:34:51.575634  196026 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0414 17:34:51.575737  196026 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/stopped-upgrade-328583/config.json ...
	I0414 17:34:51.575964  196026 start.go:360] acquireMachinesLock for stopped-upgrade-328583: {Name:mk6f64d523f60ec1e047c10a4c586315976dcd43 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 17:34:56.554219  196026 start.go:364] duration metric: took 4.978206565s to acquireMachinesLock for "stopped-upgrade-328583"
	I0414 17:34:56.554281  196026 start.go:96] Skipping create...Using existing machine configuration
	I0414 17:34:56.554289  196026 fix.go:54] fixHost starting: 
	I0414 17:34:56.554690  196026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:34:56.554746  196026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:34:56.571831  196026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41079
	I0414 17:34:56.572320  196026 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:34:56.572766  196026 main.go:141] libmachine: Using API Version  1
	I0414 17:34:56.572790  196026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:34:56.573123  196026 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:34:56.573311  196026 main.go:141] libmachine: (stopped-upgrade-328583) Calling .DriverName
	I0414 17:34:56.573460  196026 main.go:141] libmachine: (stopped-upgrade-328583) Calling .GetState
	I0414 17:34:56.574838  196026 fix.go:112] recreateIfNeeded on stopped-upgrade-328583: state=Stopped err=<nil>
	I0414 17:34:56.574878  196026 main.go:141] libmachine: (stopped-upgrade-328583) Calling .DriverName
	W0414 17:34:56.575026  196026 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 17:34:56.576736  196026 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-328583" ...
	I0414 17:34:53.468031  195612 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 17:34:53.478826  195612 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0414 17:34:53.495976  195612 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 17:34:53.499491  195612 system_pods.go:59] 6 kube-system pods found
	I0414 17:34:53.499522  195612 system_pods.go:61] "coredns-668d6bf9bc-xszpz" [af69601c-aaa4-4616-b17a-b7ffdeace7db] Running
	I0414 17:34:53.499532  195612 system_pods.go:61] "etcd-pause-439119" [4552ba2f-bcd9-4812-8f56-073d4303a1fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0414 17:34:53.499538  195612 system_pods.go:61] "kube-apiserver-pause-439119" [e83888e8-d458-4032-b9cb-3b5e58ad38e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0414 17:34:53.499545  195612 system_pods.go:61] "kube-controller-manager-pause-439119" [111c0e4d-4adf-4af4-b849-11a37fb2a9f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0414 17:34:53.499549  195612 system_pods.go:61] "kube-proxy-n9vxg" [6e5c41ec-d5b4-4578-9ad2-7e24118ebe43] Running
	I0414 17:34:53.499553  195612 system_pods.go:61] "kube-scheduler-pause-439119" [b9f8e14a-a971-44ae-bcc8-662699aaf178] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0414 17:34:53.499563  195612 system_pods.go:74] duration metric: took 3.570183ms to wait for pod list to return data ...
	I0414 17:34:53.499574  195612 node_conditions.go:102] verifying NodePressure condition ...
	I0414 17:34:53.501785  195612 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 17:34:53.501823  195612 node_conditions.go:123] node cpu capacity is 2
	I0414 17:34:53.501856  195612 node_conditions.go:105] duration metric: took 2.273422ms to run NodePressure ...
	I0414 17:34:53.501875  195612 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:34:53.770368  195612 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0414 17:34:53.773990  195612 kubeadm.go:739] kubelet initialised
	I0414 17:34:53.774009  195612 kubeadm.go:740] duration metric: took 3.614795ms waiting for restarted kubelet to initialise ...
	I0414 17:34:53.774016  195612 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:34:53.776355  195612 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-xszpz" in "kube-system" namespace to be "Ready" ...
	I0414 17:34:53.780023  195612 pod_ready.go:93] pod "coredns-668d6bf9bc-xszpz" in "kube-system" namespace has status "Ready":"True"
	I0414 17:34:53.780041  195612 pod_ready.go:82] duration metric: took 3.660443ms for pod "coredns-668d6bf9bc-xszpz" in "kube-system" namespace to be "Ready" ...
	I0414 17:34:53.780051  195612 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-439119" in "kube-system" namespace to be "Ready" ...
	I0414 17:34:55.785355  195612 pod_ready.go:103] pod "etcd-pause-439119" in "kube-system" namespace has status "Ready":"False"
	I0414 17:34:56.326755  195953 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 17:34:56.326772  195953 machine.go:96] duration metric: took 6.26660498s to provisionDockerMachine
	I0414 17:34:56.326784  195953 start.go:293] postStartSetup for "cert-expiration-560919" (driver="kvm2")
	I0414 17:34:56.326801  195953 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 17:34:56.326829  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .DriverName
	I0414 17:34:56.327136  195953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 17:34:56.327176  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHHostname
	I0414 17:34:56.329907  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:56.330282  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:56.330306  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:56.330438  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHPort
	I0414 17:34:56.330619  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:56.330750  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHUsername
	I0414 17:34:56.330857  195953 sshutil.go:53] new ssh client: &{IP:192.168.72.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/cert-expiration-560919/id_rsa Username:docker}
	I0414 17:34:56.411687  195953 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 17:34:56.415974  195953 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 17:34:56.415984  195953 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/addons for local assets ...
	I0414 17:34:56.416039  195953 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/files for local assets ...
	I0414 17:34:56.416122  195953 filesync.go:149] local asset: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem -> 1566332.pem in /etc/ssl/certs
	I0414 17:34:56.416217  195953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 17:34:56.425202  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:34:56.449143  195953 start.go:296] duration metric: took 122.343306ms for postStartSetup
	I0414 17:34:56.449164  195953 fix.go:56] duration metric: took 6.410510483s for fixHost
	I0414 17:34:56.449184  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHHostname
	I0414 17:34:56.451865  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:56.452205  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:56.452226  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:56.452386  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHPort
	I0414 17:34:56.452565  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:56.452713  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:56.452824  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHUsername
	I0414 17:34:56.452958  195953 main.go:141] libmachine: Using SSH client type: native
	I0414 17:34:56.453153  195953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.83 22 <nil> <nil>}
	I0414 17:34:56.453157  195953 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 17:34:56.554093  195953 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744652096.531510972
	
	I0414 17:34:56.554105  195953 fix.go:216] guest clock: 1744652096.531510972
	I0414 17:34:56.554112  195953 fix.go:229] Guest: 2025-04-14 17:34:56.531510972 +0000 UTC Remote: 2025-04-14 17:34:56.449166475 +0000 UTC m=+6.579917298 (delta=82.344497ms)
	I0414 17:34:56.554133  195953 fix.go:200] guest clock delta is within tolerance: 82.344497ms
	I0414 17:34:56.554138  195953 start.go:83] releasing machines lock for "cert-expiration-560919", held for 6.515492494s
	I0414 17:34:56.554163  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .DriverName
	I0414 17:34:56.554363  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetIP
	I0414 17:34:56.557089  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:56.557485  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:56.557505  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:56.557625  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .DriverName
	I0414 17:34:56.558123  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .DriverName
	I0414 17:34:56.558279  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .DriverName
	I0414 17:34:56.558364  195953 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 17:34:56.558398  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHHostname
	I0414 17:34:56.558497  195953 ssh_runner.go:195] Run: cat /version.json
	I0414 17:34:56.558518  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHHostname
	I0414 17:34:56.560997  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:56.561363  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:56.561383  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:56.561406  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:56.561658  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHPort
	I0414 17:34:56.561844  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:56.561906  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:56.561926  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:56.561965  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHUsername
	I0414 17:34:56.562042  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHPort
	I0414 17:34:56.562091  195953 sshutil.go:53] new ssh client: &{IP:192.168.72.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/cert-expiration-560919/id_rsa Username:docker}
	I0414 17:34:56.562153  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHKeyPath
	I0414 17:34:56.562261  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetSSHUsername
	I0414 17:34:56.562366  195953 sshutil.go:53] new ssh client: &{IP:192.168.72.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/cert-expiration-560919/id_rsa Username:docker}
	I0414 17:34:56.638553  195953 ssh_runner.go:195] Run: systemctl --version
	I0414 17:34:56.660455  195953 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 17:34:56.819730  195953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 17:34:56.828626  195953 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 17:34:56.828696  195953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 17:34:56.841453  195953 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0414 17:34:56.841467  195953 start.go:495] detecting cgroup driver to use...
	I0414 17:34:56.841536  195953 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 17:34:56.861035  195953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 17:34:56.877878  195953 docker.go:217] disabling cri-docker service (if available) ...
	I0414 17:34:56.877921  195953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 17:34:56.891379  195953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 17:34:56.905729  195953 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 17:34:57.064738  195953 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 17:34:57.219177  195953 docker.go:233] disabling docker service ...
	I0414 17:34:57.219223  195953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 17:34:57.239267  195953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 17:34:57.253972  195953 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 17:34:57.405702  195953 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 17:34:57.555224  195953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 17:34:57.570019  195953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 17:34:57.588825  195953 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 17:34:57.588876  195953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:57.599673  195953 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 17:34:57.599722  195953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:57.610348  195953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:57.620806  195953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:57.631961  195953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 17:34:57.642978  195953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:57.653515  195953 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:57.663999  195953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:34:57.674251  195953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 17:34:57.683834  195953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 17:34:57.694332  195953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:34:57.834285  195953 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 17:34:58.932396  195953 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.098088682s)
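Taken together, the sed edits above pin the pause image, switch the cgroup manager, set the conmon cgroup, and open unprivileged ports. A hedged spot-check of the resulting drop-in; the expected values are inferred from the commands above, not captured from the VM:
	# inspect the keys rewritten by the sed commands above
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",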
	I0414 17:34:58.932422  195953 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 17:34:58.932481  195953 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 17:34:58.937896  195953 start.go:563] Will wait 60s for crictl version
	I0414 17:34:58.937954  195953 ssh_runner.go:195] Run: which crictl
	I0414 17:34:58.941994  195953 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 17:34:58.984684  195953 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 17:34:58.984769  195953 ssh_runner.go:195] Run: crio --version
	I0414 17:34:59.013287  195953 ssh_runner.go:195] Run: crio --version
	I0414 17:34:59.045528  195953 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 17:34:59.046637  195953 main.go:141] libmachine: (cert-expiration-560919) Calling .GetIP
	I0414 17:34:59.049720  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:59.050137  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:77:7a", ip: ""} in network mk-cert-expiration-560919: {Iface:virbr4 ExpiryTime:2025-04-14 18:31:23 +0000 UTC Type:0 Mac:52:54:00:e8:77:7a Iaid: IPaddr:192.168.72.83 Prefix:24 Hostname:cert-expiration-560919 Clientid:01:52:54:00:e8:77:7a}
	I0414 17:34:59.050160  195953 main.go:141] libmachine: (cert-expiration-560919) DBG | domain cert-expiration-560919 has defined IP address 192.168.72.83 and MAC address 52:54:00:e8:77:7a in network mk-cert-expiration-560919
	I0414 17:34:59.050385  195953 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0414 17:34:59.054822  195953 kubeadm.go:883] updating cluster {Name:cert-expiration-560919 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:cert-expiration-560919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.83 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 17:34:59.054930  195953 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 17:34:59.054976  195953 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:34:59.103706  195953 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 17:34:59.103717  195953 crio.go:433] Images already preloaded, skipping extraction
	I0414 17:34:59.103776  195953 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:34:59.139082  195953 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 17:34:59.139094  195953 cache_images.go:84] Images are preloaded, skipping loading
	I0414 17:34:59.139099  195953 kubeadm.go:934] updating node { 192.168.72.83 8443 v1.32.2 crio true true} ...
	I0414 17:34:59.139183  195953 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-560919 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:cert-expiration-560919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
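Once the daemon-reload below runs, the drop-in rendered above lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (per the scp line further down). A quick way to confirm systemd merged it, sketched with standard systemctl commands:
	# show the merged unit, drop-ins included
	sudo systemctl cat kubelet
	# the effective command line should match the ExecStart rendered above
	systemctl show kubelet -p ExecStart --no-pager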
	I0414 17:34:59.139241  195953 ssh_runner.go:195] Run: crio config
	I0414 17:34:59.190219  195953 cni.go:84] Creating CNI manager for ""
	I0414 17:34:59.190230  195953 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:34:59.190241  195953 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 17:34:59.190258  195953 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.83 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-560919 NodeName:cert-expiration-560919 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.83"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.83 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 17:34:59.190368  195953 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.83
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-560919"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.83"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.83"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
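The three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written out as the kubeadm.yaml used later in this log. If editing such a file by hand, it can be sanity-checked before kubeadm consumes it; a sketch, assuming the versioned binary path from this log and that this kubeadm release supports the "config validate" subcommand:
	# run inside the VM (e.g. via minikube ssh); path from the scp line below
	sudo /var/lib/minikube/binaries/v1.32.2/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml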
	
	I0414 17:34:59.190418  195953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 17:34:59.201886  195953 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 17:34:59.201950  195953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 17:34:59.212873  195953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0414 17:34:59.229759  195953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 17:34:59.247390  195953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2299 bytes)
	I0414 17:34:59.264986  195953 ssh_runner.go:195] Run: grep 192.168.72.83	control-plane.minikube.internal$ /etc/hosts
	I0414 17:34:59.269112  195953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:34:59.412313  195953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:34:59.429036  195953 certs.go:68] Setting up /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919 for IP: 192.168.72.83
	I0414 17:34:59.429054  195953 certs.go:194] generating shared ca certs ...
	I0414 17:34:59.429085  195953 certs.go:226] acquiring lock for ca certs: {Name:mk65518f71a0fe967168d84423f624d889cf0622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:34:59.429356  195953 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key
	I0414 17:34:59.429419  195953 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key
	I0414 17:34:59.429427  195953 certs.go:256] generating profile certs ...
	W0414 17:34:59.429583  195953 out.go:270] ! Certificate client.crt has expired. Generating a new one...
	I0414 17:34:59.429621  195953 certs.go:624] cert expired /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/client.crt: expiration: 2025-04-14 17:34:36 +0000 UTC, now: 2025-04-14 17:34:59.429604714 +0000 UTC m=+9.560355557
	I0414 17:34:59.429804  195953 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/client.key
	I0414 17:34:59.429854  195953 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/client.crt with IP's: []
	I0414 17:34:56.577902  196026 main.go:141] libmachine: (stopped-upgrade-328583) Calling .Start
	I0414 17:34:56.578059  196026 main.go:141] libmachine: (stopped-upgrade-328583) starting domain...
	I0414 17:34:56.578076  196026 main.go:141] libmachine: (stopped-upgrade-328583) ensuring networks are active...
	I0414 17:34:56.578817  196026 main.go:141] libmachine: (stopped-upgrade-328583) Ensuring network default is active
	I0414 17:34:56.579157  196026 main.go:141] libmachine: (stopped-upgrade-328583) Ensuring network mk-stopped-upgrade-328583 is active
	I0414 17:34:56.579546  196026 main.go:141] libmachine: (stopped-upgrade-328583) getting domain XML...
	I0414 17:34:56.580258  196026 main.go:141] libmachine: (stopped-upgrade-328583) creating domain...
	I0414 17:34:57.829662  196026 main.go:141] libmachine: (stopped-upgrade-328583) waiting for IP...
	I0414 17:34:57.830795  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | domain stopped-upgrade-328583 has defined MAC address 52:54:00:82:fa:7d in network mk-stopped-upgrade-328583
	I0414 17:34:57.831278  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | unable to find current IP address of domain stopped-upgrade-328583 in network mk-stopped-upgrade-328583
	I0414 17:34:57.831443  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | I0414 17:34:57.831313  196078 retry.go:31] will retry after 254.367284ms: waiting for domain to come up
	I0414 17:34:58.087890  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | domain stopped-upgrade-328583 has defined MAC address 52:54:00:82:fa:7d in network mk-stopped-upgrade-328583
	I0414 17:34:58.088379  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | unable to find current IP address of domain stopped-upgrade-328583 in network mk-stopped-upgrade-328583
	I0414 17:34:58.088458  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | I0414 17:34:58.088366  196078 retry.go:31] will retry after 387.524544ms: waiting for domain to come up
	I0414 17:34:58.478090  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | domain stopped-upgrade-328583 has defined MAC address 52:54:00:82:fa:7d in network mk-stopped-upgrade-328583
	I0414 17:34:58.478595  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | unable to find current IP address of domain stopped-upgrade-328583 in network mk-stopped-upgrade-328583
	I0414 17:34:58.478625  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | I0414 17:34:58.478553  196078 retry.go:31] will retry after 425.878823ms: waiting for domain to come up
	I0414 17:34:58.906079  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | domain stopped-upgrade-328583 has defined MAC address 52:54:00:82:fa:7d in network mk-stopped-upgrade-328583
	I0414 17:34:58.906572  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | unable to find current IP address of domain stopped-upgrade-328583 in network mk-stopped-upgrade-328583
	I0414 17:34:58.906600  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | I0414 17:34:58.906529  196078 retry.go:31] will retry after 546.269245ms: waiting for domain to come up
	I0414 17:34:59.454183  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | domain stopped-upgrade-328583 has defined MAC address 52:54:00:82:fa:7d in network mk-stopped-upgrade-328583
	I0414 17:34:59.454665  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | unable to find current IP address of domain stopped-upgrade-328583 in network mk-stopped-upgrade-328583
	I0414 17:34:59.454688  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | I0414 17:34:59.454634  196078 retry.go:31] will retry after 727.754381ms: waiting for domain to come up
	I0414 17:35:00.183764  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | domain stopped-upgrade-328583 has defined MAC address 52:54:00:82:fa:7d in network mk-stopped-upgrade-328583
	I0414 17:35:00.184435  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | unable to find current IP address of domain stopped-upgrade-328583 in network mk-stopped-upgrade-328583
	I0414 17:35:00.184467  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | I0414 17:35:00.184383  196078 retry.go:31] will retry after 763.380109ms: waiting for domain to come up
	I0414 17:35:00.949192  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | domain stopped-upgrade-328583 has defined MAC address 52:54:00:82:fa:7d in network mk-stopped-upgrade-328583
	I0414 17:35:00.949732  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | unable to find current IP address of domain stopped-upgrade-328583 in network mk-stopped-upgrade-328583
	I0414 17:35:00.949761  196026 main.go:141] libmachine: (stopped-upgrade-328583) DBG | I0414 17:35:00.949698  196078 retry.go:31] will retry after 1.051261488s: waiting for domain to come up
	I0414 17:34:57.786074  195612 pod_ready.go:103] pod "etcd-pause-439119" in "kube-system" namespace has status "Ready":"False"
	I0414 17:34:59.788768  195612 pod_ready.go:103] pod "etcd-pause-439119" in "kube-system" namespace has status "Ready":"False"
	I0414 17:35:01.287146  195612 pod_ready.go:93] pod "etcd-pause-439119" in "kube-system" namespace has status "Ready":"True"
	I0414 17:35:01.287173  195612 pod_ready.go:82] duration metric: took 7.507113484s for pod "etcd-pause-439119" in "kube-system" namespace to be "Ready" ...
	I0414 17:35:01.287186  195612 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-439119" in "kube-system" namespace to be "Ready" ...
	I0414 17:35:00.210928  195953 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/client.crt ...
	I0414 17:35:00.210949  195953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/client.crt: {Name:mk1a3e86a227e16ac7d389fe6694ecf1cbc99d31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:35:00.211087  195953 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/client.key ...
	I0414 17:35:00.211095  195953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/client.key: {Name:mk9c4109d7dc18cee89f23d8f3807ca796c29532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0414 17:35:00.211243  195953 out.go:270] ! Certificate apiserver.crt.1713f484 has expired. Generating a new one...
	I0414 17:35:00.211262  195953 certs.go:624] cert expired /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.crt.1713f484: expiration: 2025-04-14 17:34:36 +0000 UTC, now: 2025-04-14 17:35:00.211254374 +0000 UTC m=+10.342005193
	I0414 17:35:00.211356  195953 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.key.1713f484
	I0414 17:35:00.211371  195953 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.crt.1713f484 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.83]
	I0414 17:35:00.328463  195953 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.crt.1713f484 ...
	I0414 17:35:00.328479  195953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.crt.1713f484: {Name:mk8472676c8a0638d6e929a80079b2398ac46873 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:35:00.328603  195953 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.key.1713f484 ...
	I0414 17:35:00.328611  195953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.key.1713f484: {Name:mk08cea86fd55a689ee66093943d0fe68f630963 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:35:00.328662  195953 certs.go:381] copying /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.crt.1713f484 -> /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.crt
	I0414 17:35:00.328788  195953 certs.go:385] copying /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.key.1713f484 -> /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.key
	W0414 17:35:00.328939  195953 out.go:270] ! Certificate proxy-client.crt has expired. Generating a new one...
	I0414 17:35:00.328956  195953 certs.go:624] cert expired /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/proxy-client.crt: expiration: 2025-04-14 17:34:36 +0000 UTC, now: 2025-04-14 17:35:00.328951112 +0000 UTC m=+10.459701934
	I0414 17:35:00.329031  195953 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/proxy-client.key
	I0414 17:35:00.329050  195953 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/proxy-client.crt with IP's: []
	I0414 17:35:00.685200  195953 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/proxy-client.crt ...
	I0414 17:35:00.685216  195953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/proxy-client.crt: {Name:mka60584e2d24895048518a295e78ab3c4d045ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:35:00.685359  195953 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/proxy-client.key ...
	I0414 17:35:00.685366  195953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/proxy-client.key: {Name:mk12d74da66aec6d92f16a26ae33cdd80a77e72d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:35:00.685507  195953 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem (1338 bytes)
	W0414 17:35:00.685537  195953 certs.go:480] ignoring /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633_empty.pem, impossibly tiny 0 bytes
	I0414 17:35:00.685543  195953 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 17:35:00.685563  195953 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem (1082 bytes)
	I0414 17:35:00.685580  195953 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem (1123 bytes)
	I0414 17:35:00.685599  195953 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem (1675 bytes)
	I0414 17:35:00.685634  195953 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:35:00.686215  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 17:35:00.720825  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 17:35:00.788568  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 17:35:00.879898  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 17:35:00.978116  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 17:35:01.012809  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 17:35:01.044791  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 17:35:01.111425  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 17:35:01.156811  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 17:35:01.189086  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem --> /usr/share/ca-certificates/156633.pem (1338 bytes)
	I0414 17:35:01.214450  195953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /usr/share/ca-certificates/1566332.pem (1708 bytes)
	I0414 17:35:01.248152  195953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 17:35:01.271355  195953 ssh_runner.go:195] Run: openssl version
	I0414 17:35:01.277145  195953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1566332.pem && ln -fs /usr/share/ca-certificates/1566332.pem /etc/ssl/certs/1566332.pem"
	I0414 17:35:01.291005  195953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1566332.pem
	I0414 17:35:01.296390  195953 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 16:39 /usr/share/ca-certificates/1566332.pem
	I0414 17:35:01.296438  195953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1566332.pem
	I0414 17:35:01.302540  195953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1566332.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 17:35:01.315624  195953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 17:35:01.327753  195953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:35:01.332195  195953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 16:31 /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:35:01.332238  195953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:35:01.337934  195953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 17:35:01.351483  195953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156633.pem && ln -fs /usr/share/ca-certificates/156633.pem /etc/ssl/certs/156633.pem"
	I0414 17:35:01.364312  195953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156633.pem
	I0414 17:35:01.370195  195953 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 16:39 /usr/share/ca-certificates/156633.pem
	I0414 17:35:01.370240  195953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156633.pem
	I0414 17:35:01.380830  195953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/156633.pem /etc/ssl/certs/51391683.0"
	I0414 17:35:01.397576  195953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 17:35:01.403089  195953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 17:35:01.411348  195953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 17:35:01.420324  195953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 17:35:01.429473  195953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 17:35:01.438189  195953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 17:35:01.446351  195953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
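The string of openssl runs above uses -checkend 86400, which exits non-zero when a certificate expires within 24 hours; that is the signal behind the regeneration warnings earlier in this log. A standalone equivalent against one of the same files:
	CERT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
	sudo openssl x509 -noout -enddate -in "$CERT"    # print the notAfter date
	sudo openssl x509 -noout -checkend 86400 -in "$CERT" \
	  && echo "valid for at least 24h" || echo "expires within 24h"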
	I0414 17:35:01.454336  195953 kubeadm.go:392] StartCluster: {Name:cert-expiration-560919 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:cert-expiration-560919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.83 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:35:01.454413  195953 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 17:35:01.454485  195953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:35:01.505251  195953 cri.go:89] found id: "e2660940daeb42abf2f50c165dfa1073dbb84e53893d9df62831e5c754332025"
	I0414 17:35:01.505266  195953 cri.go:89] found id: "547d7964b06ba356ed04dedb93d72f142fe2185afe28b0650c72d6567139cab0"
	I0414 17:35:01.505270  195953 cri.go:89] found id: "c1d7060f701a1c1cd3e7d1b1d4b75f7c3d0880426b3fdb839c9a33bc0b0408e1"
	I0414 17:35:01.505273  195953 cri.go:89] found id: "4fc0c0cf0d9d9399433827437eb611ef8930c7db8fdc5abd04bd9a7ce9af4786"
	I0414 17:35:01.505276  195953 cri.go:89] found id: "66f47628927cc9771354efffd2014fc06eea59017aae0979dc4e701f5533a621"
	I0414 17:35:01.505279  195953 cri.go:89] found id: "b777bcd82a829e3cfc15f0cfe0d6dfc08f8a48272e8ee45bcc21d9be47d3cf50"
	I0414 17:35:01.505281  195953 cri.go:89] found id: "eb2ed7ff2c22f89bd6bbcdbf3feb547ac90b88803adf8eed00df51703c4b3d9e"
	I0414 17:35:01.505284  195953 cri.go:89] found id: "8d939468aa318dc3076ebed334eca6e8e1849e342451941b76e2dc57966bce87"
	I0414 17:35:01.505286  195953 cri.go:89] found id: "506d8235e69702667463ddddb28304c354cd3afffa4302a3a34d7bad32ddad7c"
	I0414 17:35:01.505294  195953 cri.go:89] found id: "f778abc2cc4ca8dc947b985068084ef3c626abc7a5f7939bfcc844368b63f96a"
	I0414 17:35:01.505297  195953 cri.go:89] found id: "6aa42639422dcd7d038fa6a0dad19d26232a43ac280e369113ba5e678a33b031"
	I0414 17:35:01.505299  195953 cri.go:89] found id: "241d383c162f9c1790ca165d7adf602b97568a444c5db09e3ece67cb4fe821c7"
	I0414 17:35:01.505302  195953 cri.go:89] found id: "23d8785a366019c3b9292722f760812b9f7c6395bdadcab9e04092a57d430bda"
	I0414 17:35:01.505305  195953 cri.go:89] found id: ""
	I0414 17:35:01.505356  195953 ssh_runner.go:195] Run: sudo runc list -f json

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-439119 -n pause-439119
helpers_test.go:261: (dbg) Run:  kubectl --context pause-439119 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (66.74s)

TestStartStop/group/old-k8s-version/serial/FirstStart (277.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-768580 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-768580 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m37.260070494s)

-- stdout --
	* [old-k8s-version-768580] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20349
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-768580" primary control-plane node in "old-k8s-version-768580" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0414 17:39:13.727038  206309 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:39:13.727306  206309 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:39:13.727316  206309 out.go:358] Setting ErrFile to fd 2...
	I0414 17:39:13.727320  206309 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:39:13.727509  206309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	I0414 17:39:13.728107  206309 out.go:352] Setting JSON to false
	I0414 17:39:13.729280  206309 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8452,"bootTime":1744643902,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 17:39:13.729366  206309 start.go:139] virtualization: kvm guest
	I0414 17:39:13.731275  206309 out.go:177] * [old-k8s-version-768580] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 17:39:13.732780  206309 notify.go:220] Checking for updates...
	I0414 17:39:13.732802  206309 out.go:177]   - MINIKUBE_LOCATION=20349
	I0414 17:39:13.734006  206309 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 17:39:13.735225  206309 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:39:13.736340  206309 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 17:39:13.737398  206309 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 17:39:13.738348  206309 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 17:39:13.739785  206309 config.go:182] Loaded profile config "bridge-993774": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:39:13.739900  206309 config.go:182] Loaded profile config "flannel-993774": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:39:13.740009  206309 config.go:182] Loaded profile config "kubernetes-upgrade-771697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:39:13.740126  206309 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 17:39:13.776579  206309 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 17:39:13.777741  206309 start.go:297] selected driver: kvm2
	I0414 17:39:13.777756  206309 start.go:901] validating driver "kvm2" against <nil>
	I0414 17:39:13.777772  206309 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 17:39:13.778844  206309 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:39:13.778957  206309 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20349-149500/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 17:39:13.795824  206309 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 17:39:13.795883  206309 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 17:39:13.796208  206309 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 17:39:13.796270  206309 cni.go:84] Creating CNI manager for ""
	I0414 17:39:13.796330  206309 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:39:13.796344  206309 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 17:39:13.796395  206309 start.go:340] cluster config:
	{Name:old-k8s-version-768580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:39:13.796497  206309 iso.go:125] acquiring lock: {Name:mk56ab209abfa01de10f2f82564ecd03de00499a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:39:13.798269  206309 out.go:177] * Starting "old-k8s-version-768580" primary control-plane node in "old-k8s-version-768580" cluster
	I0414 17:39:13.799539  206309 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 17:39:13.799590  206309 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 17:39:13.799602  206309 cache.go:56] Caching tarball of preloaded images
	I0414 17:39:13.799705  206309 preload.go:172] Found /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 17:39:13.799719  206309 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0414 17:39:13.799834  206309 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/config.json ...
	I0414 17:39:13.799861  206309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/config.json: {Name:mkbd5c16f8c4bb93f665982933917ca0fa3fab53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:39:13.800031  206309 start.go:360] acquireMachinesLock for old-k8s-version-768580: {Name:mk6f64d523f60ec1e047c10a4c586315976dcd43 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 17:39:17.976197  206309 start.go:364] duration metric: took 4.176121608s to acquireMachinesLock for "old-k8s-version-768580"
	I0414 17:39:17.976257  206309 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-768580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 17:39:17.976365  206309 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 17:39:17.978181  206309 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 17:39:17.978406  206309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:39:17.978466  206309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:39:17.995481  206309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36965
	I0414 17:39:17.995988  206309 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:39:17.996520  206309 main.go:141] libmachine: Using API Version  1
	I0414 17:39:17.996544  206309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:39:17.996948  206309 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:39:17.997171  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetMachineName
	I0414 17:39:17.997341  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:39:17.997469  206309 start.go:159] libmachine.API.Create for "old-k8s-version-768580" (driver="kvm2")
	I0414 17:39:17.997507  206309 client.go:168] LocalClient.Create starting
	I0414 17:39:17.997543  206309 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem
	I0414 17:39:17.997595  206309 main.go:141] libmachine: Decoding PEM data...
	I0414 17:39:17.997618  206309 main.go:141] libmachine: Parsing certificate...
	I0414 17:39:17.997696  206309 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem
	I0414 17:39:17.997730  206309 main.go:141] libmachine: Decoding PEM data...
	I0414 17:39:17.997748  206309 main.go:141] libmachine: Parsing certificate...
	I0414 17:39:17.997776  206309 main.go:141] libmachine: Running pre-create checks...
	I0414 17:39:17.997789  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .PreCreateCheck
	I0414 17:39:17.998162  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetConfigRaw
	I0414 17:39:17.998542  206309 main.go:141] libmachine: Creating machine...
	I0414 17:39:17.998565  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .Create
	I0414 17:39:17.998724  206309 main.go:141] libmachine: (old-k8s-version-768580) creating KVM machine...
	I0414 17:39:17.998739  206309 main.go:141] libmachine: (old-k8s-version-768580) creating network...
	I0414 17:39:17.999943  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found existing default KVM network
	I0414 17:39:18.000976  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:18.000821  206405 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d3:e2:76} reservation:<nil>}
	I0414 17:39:18.001893  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:18.001797  206405 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:76:cc:42} reservation:<nil>}
	I0414 17:39:18.002534  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:18.002463  206405 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:1e:4e:88} reservation:<nil>}
	I0414 17:39:18.003677  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:18.003609  206405 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000305260}
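The three "skipping subnet" lines above are the free-subnet search: walk candidate private /24s and take the first one that no existing host bridge already claims. A toy Go sketch of that walk, with the candidate list and taken set copied from this run (the real code inspects live interfaces rather than a hard-coded map):

	package main

	import "fmt"

	func main() {
		// Subnets already claimed by virbr1..virbr3 in the log above.
		taken := map[string]bool{
			"192.168.39.0/24": true,
			"192.168.50.0/24": true,
			"192.168.61.0/24": true,
		}
		// Candidate order matches the log: 39, 50, 61 are skipped, 72 is free.
		for _, cidr := range []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"} {
			if taken[cidr] {
				fmt.Println("skipping subnet", cidr, "that is taken")
				continue
			}
			fmt.Println("using free private subnet", cidr)
			break
		}
	}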
	I0414 17:39:18.003757  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | created network xml: 
	I0414 17:39:18.003772  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | <network>
	I0414 17:39:18.003783  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG |   <name>mk-old-k8s-version-768580</name>
	I0414 17:39:18.003791  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG |   <dns enable='no'/>
	I0414 17:39:18.003799  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG |   
	I0414 17:39:18.003817  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0414 17:39:18.003827  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG |     <dhcp>
	I0414 17:39:18.003836  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0414 17:39:18.003844  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG |     </dhcp>
	I0414 17:39:18.003850  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG |   </ip>
	I0414 17:39:18.003858  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG |   
	I0414 17:39:18.003872  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | </network>
	I0414 17:39:18.003882  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | 
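The XML above is what libmachine hands to libvirt for the private network. For reference, a minimal Go sketch (illustrative helper only, not minikube's implementation) that registers and activates an equivalent network by shelling out to virsh; the XML and network name are taken verbatim from the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// defineNetwork writes the XML to a temp file, then registers and
	// starts the network with virsh against qemu:///system.
	func defineNetwork(name, xml string) error {
		f, err := os.CreateTemp("", name+"-*.xml")
		if err != nil {
			return err
		}
		defer os.Remove(f.Name())
		if _, err := f.WriteString(xml); err != nil {
			return err
		}
		if err := f.Close(); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"net-define", f.Name()}, // register the persistent network
			{"net-start", name},      // activate it (creates the bridge)
		} {
			cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
			if out, err := cmd.CombinedOutput(); err != nil {
				return fmt.Errorf("virsh %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		xml := `<network>
	  <name>mk-old-k8s-version-768580</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>`
		if err := defineNetwork("mk-old-k8s-version-768580", xml); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}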
	I0414 17:39:18.009101  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | trying to create private KVM network mk-old-k8s-version-768580 192.168.72.0/24...
	I0414 17:39:18.101663  206309 main.go:141] libmachine: (old-k8s-version-768580) setting up store path in /home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580 ...
	I0414 17:39:18.101688  206309 main.go:141] libmachine: (old-k8s-version-768580) building disk image from file:///home/jenkins/minikube-integration/20349-149500/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 17:39:18.101706  206309 main.go:141] libmachine: (old-k8s-version-768580) Downloading /home/jenkins/minikube-integration/20349-149500/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20349-149500/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 17:39:18.101721  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | private KVM network mk-old-k8s-version-768580 192.168.72.0/24 created
	I0414 17:39:18.101743  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:18.101415  206405 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 17:39:18.416467  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:18.416292  206405 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa...
	I0414 17:39:18.782516  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:18.782366  206405 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/old-k8s-version-768580.rawdisk...
	I0414 17:39:18.782550  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | Writing magic tar header
	I0414 17:39:18.782568  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | Writing SSH key tar header
	I0414 17:39:18.782582  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:18.782521  206405 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580 ...
	I0414 17:39:18.782680  206309 main.go:141] libmachine: (old-k8s-version-768580) setting executable bit set on /home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580 (perms=drwx------)
	I0414 17:39:18.782695  206309 main.go:141] libmachine: (old-k8s-version-768580) setting executable bit set on /home/jenkins/minikube-integration/20349-149500/.minikube/machines (perms=drwxr-xr-x)
	I0414 17:39:18.782709  206309 main.go:141] libmachine: (old-k8s-version-768580) setting executable bit set on /home/jenkins/minikube-integration/20349-149500/.minikube (perms=drwxr-xr-x)
	I0414 17:39:18.782729  206309 main.go:141] libmachine: (old-k8s-version-768580) setting executable bit set on /home/jenkins/minikube-integration/20349-149500 (perms=drwxrwxr-x)
	I0414 17:39:18.782739  206309 main.go:141] libmachine: (old-k8s-version-768580) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 17:39:18.782748  206309 main.go:141] libmachine: (old-k8s-version-768580) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 17:39:18.782759  206309 main.go:141] libmachine: (old-k8s-version-768580) creating domain...
	I0414 17:39:18.782773  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580
	I0414 17:39:18.782785  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20349-149500/.minikube/machines
	I0414 17:39:18.782795  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 17:39:18.782803  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20349-149500
	I0414 17:39:18.782813  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 17:39:18.782825  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | checking permissions on dir: /home/jenkins
	I0414 17:39:18.782836  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | checking permissions on dir: /home
	I0414 17:39:18.782854  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | skipping /home - not owner
	I0414 17:39:18.784188  206309 main.go:141] libmachine: (old-k8s-version-768580) define libvirt domain using xml: 
	I0414 17:39:18.784215  206309 main.go:141] libmachine: (old-k8s-version-768580) <domain type='kvm'>
	I0414 17:39:18.784226  206309 main.go:141] libmachine: (old-k8s-version-768580)   <name>old-k8s-version-768580</name>
	I0414 17:39:18.784234  206309 main.go:141] libmachine: (old-k8s-version-768580)   <memory unit='MiB'>2200</memory>
	I0414 17:39:18.784243  206309 main.go:141] libmachine: (old-k8s-version-768580)   <vcpu>2</vcpu>
	I0414 17:39:18.784261  206309 main.go:141] libmachine: (old-k8s-version-768580)   <features>
	I0414 17:39:18.784269  206309 main.go:141] libmachine: (old-k8s-version-768580)     <acpi/>
	I0414 17:39:18.784275  206309 main.go:141] libmachine: (old-k8s-version-768580)     <apic/>
	I0414 17:39:18.784283  206309 main.go:141] libmachine: (old-k8s-version-768580)     <pae/>
	I0414 17:39:18.784289  206309 main.go:141] libmachine: (old-k8s-version-768580)     
	I0414 17:39:18.784297  206309 main.go:141] libmachine: (old-k8s-version-768580)   </features>
	I0414 17:39:18.784304  206309 main.go:141] libmachine: (old-k8s-version-768580)   <cpu mode='host-passthrough'>
	I0414 17:39:18.784312  206309 main.go:141] libmachine: (old-k8s-version-768580)   
	I0414 17:39:18.784317  206309 main.go:141] libmachine: (old-k8s-version-768580)   </cpu>
	I0414 17:39:18.784333  206309 main.go:141] libmachine: (old-k8s-version-768580)   <os>
	I0414 17:39:18.784340  206309 main.go:141] libmachine: (old-k8s-version-768580)     <type>hvm</type>
	I0414 17:39:18.784349  206309 main.go:141] libmachine: (old-k8s-version-768580)     <boot dev='cdrom'/>
	I0414 17:39:18.784354  206309 main.go:141] libmachine: (old-k8s-version-768580)     <boot dev='hd'/>
	I0414 17:39:18.784362  206309 main.go:141] libmachine: (old-k8s-version-768580)     <bootmenu enable='no'/>
	I0414 17:39:18.784368  206309 main.go:141] libmachine: (old-k8s-version-768580)   </os>
	I0414 17:39:18.784376  206309 main.go:141] libmachine: (old-k8s-version-768580)   <devices>
	I0414 17:39:18.784383  206309 main.go:141] libmachine: (old-k8s-version-768580)     <disk type='file' device='cdrom'>
	I0414 17:39:18.784396  206309 main.go:141] libmachine: (old-k8s-version-768580)       <source file='/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/boot2docker.iso'/>
	I0414 17:39:18.784403  206309 main.go:141] libmachine: (old-k8s-version-768580)       <target dev='hdc' bus='scsi'/>
	I0414 17:39:18.784411  206309 main.go:141] libmachine: (old-k8s-version-768580)       <readonly/>
	I0414 17:39:18.784418  206309 main.go:141] libmachine: (old-k8s-version-768580)     </disk>
	I0414 17:39:18.784427  206309 main.go:141] libmachine: (old-k8s-version-768580)     <disk type='file' device='disk'>
	I0414 17:39:18.784435  206309 main.go:141] libmachine: (old-k8s-version-768580)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 17:39:18.784449  206309 main.go:141] libmachine: (old-k8s-version-768580)       <source file='/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/old-k8s-version-768580.rawdisk'/>
	I0414 17:39:18.784457  206309 main.go:141] libmachine: (old-k8s-version-768580)       <target dev='hda' bus='virtio'/>
	I0414 17:39:18.784465  206309 main.go:141] libmachine: (old-k8s-version-768580)     </disk>
	I0414 17:39:18.784471  206309 main.go:141] libmachine: (old-k8s-version-768580)     <interface type='network'>
	I0414 17:39:18.784479  206309 main.go:141] libmachine: (old-k8s-version-768580)       <source network='mk-old-k8s-version-768580'/>
	I0414 17:39:18.784485  206309 main.go:141] libmachine: (old-k8s-version-768580)       <model type='virtio'/>
	I0414 17:39:18.784493  206309 main.go:141] libmachine: (old-k8s-version-768580)     </interface>
	I0414 17:39:18.784500  206309 main.go:141] libmachine: (old-k8s-version-768580)     <interface type='network'>
	I0414 17:39:18.784509  206309 main.go:141] libmachine: (old-k8s-version-768580)       <source network='default'/>
	I0414 17:39:18.784515  206309 main.go:141] libmachine: (old-k8s-version-768580)       <model type='virtio'/>
	I0414 17:39:18.784525  206309 main.go:141] libmachine: (old-k8s-version-768580)     </interface>
	I0414 17:39:18.784532  206309 main.go:141] libmachine: (old-k8s-version-768580)     <serial type='pty'>
	I0414 17:39:18.784540  206309 main.go:141] libmachine: (old-k8s-version-768580)       <target port='0'/>
	I0414 17:39:18.784546  206309 main.go:141] libmachine: (old-k8s-version-768580)     </serial>
	I0414 17:39:18.784555  206309 main.go:141] libmachine: (old-k8s-version-768580)     <console type='pty'>
	I0414 17:39:18.784562  206309 main.go:141] libmachine: (old-k8s-version-768580)       <target type='serial' port='0'/>
	I0414 17:39:18.784569  206309 main.go:141] libmachine: (old-k8s-version-768580)     </console>
	I0414 17:39:18.784576  206309 main.go:141] libmachine: (old-k8s-version-768580)     <rng model='virtio'>
	I0414 17:39:18.784585  206309 main.go:141] libmachine: (old-k8s-version-768580)       <backend model='random'>/dev/random</backend>
	I0414 17:39:18.784598  206309 main.go:141] libmachine: (old-k8s-version-768580)     </rng>
	I0414 17:39:18.784606  206309 main.go:141] libmachine: (old-k8s-version-768580)     
	I0414 17:39:18.784611  206309 main.go:141] libmachine: (old-k8s-version-768580)     
	I0414 17:39:18.784619  206309 main.go:141] libmachine: (old-k8s-version-768580)   </devices>
	I0414 17:39:18.784626  206309 main.go:141] libmachine: (old-k8s-version-768580) </domain>
	I0414 17:39:18.784635  206309 main.go:141] libmachine: (old-k8s-version-768580) 
	I0414 17:39:18.789898  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:29:79:d9 in network default
	I0414 17:39:18.790773  206309 main.go:141] libmachine: (old-k8s-version-768580) starting domain...
	I0414 17:39:18.790786  206309 main.go:141] libmachine: (old-k8s-version-768580) ensuring networks are active...
	I0414 17:39:18.790804  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:18.791686  206309 main.go:141] libmachine: (old-k8s-version-768580) Ensuring network default is active
	I0414 17:39:18.792173  206309 main.go:141] libmachine: (old-k8s-version-768580) Ensuring network mk-old-k8s-version-768580 is active
	I0414 17:39:18.792842  206309 main.go:141] libmachine: (old-k8s-version-768580) getting domain XML...
	I0414 17:39:18.793624  206309 main.go:141] libmachine: (old-k8s-version-768580) creating domain...
	I0414 17:39:20.496002  206309 main.go:141] libmachine: (old-k8s-version-768580) waiting for IP...
	I0414 17:39:20.496975  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:20.497531  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:39:20.497616  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:20.497535  206405 retry.go:31] will retry after 298.748484ms: waiting for domain to come up
	I0414 17:39:20.798283  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:20.801052  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:39:20.801082  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:20.800951  206405 retry.go:31] will retry after 367.847605ms: waiting for domain to come up
	I0414 17:39:21.170938  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:21.171515  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:39:21.171539  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:21.171467  206405 retry.go:31] will retry after 304.806188ms: waiting for domain to come up
	I0414 17:39:21.478131  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:21.478712  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:39:21.478736  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:21.478689  206405 retry.go:31] will retry after 386.238262ms: waiting for domain to come up
	I0414 17:39:21.866457  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:21.867083  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:39:21.867139  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:21.867056  206405 retry.go:31] will retry after 692.535131ms: waiting for domain to come up
	I0414 17:39:22.561209  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:22.561755  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:39:22.561787  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:22.561710  206405 retry.go:31] will retry after 808.587705ms: waiting for domain to come up
	I0414 17:39:23.372142  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:23.372773  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:39:23.372804  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:23.372754  206405 retry.go:31] will retry after 1.025779677s: waiting for domain to come up
	I0414 17:39:24.400567  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:24.401183  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:39:24.401210  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:24.401162  206405 retry.go:31] will retry after 1.396385721s: waiting for domain to come up
	I0414 17:39:25.798905  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:25.799341  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:39:25.799372  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:25.799293  206405 retry.go:31] will retry after 1.594554128s: waiting for domain to come up
	I0414 17:39:27.396432  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:27.396931  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:39:27.396982  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:27.396908  206405 retry.go:31] will retry after 1.517980618s: waiting for domain to come up
	I0414 17:39:28.916624  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:28.917116  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:39:28.917182  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:28.917121  206405 retry.go:31] will retry after 2.342570155s: waiting for domain to come up
	I0414 17:39:31.260961  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:31.261531  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:39:31.261554  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:31.261480  206405 retry.go:31] will retry after 2.750990988s: waiting for domain to come up
	I0414 17:39:34.013655  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:34.014246  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:39:34.014274  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:34.014211  206405 retry.go:31] will retry after 3.513109199s: waiting for domain to come up
	I0414 17:39:37.531826  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:37.532235  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:39:37.532261  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:39:37.532203  206405 retry.go:31] will retry after 3.887637864s: waiting for domain to come up
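The retry cadence above (roughly 300 ms growing to about 3.9 s) is a jittered backoff while polling for the guest's DHCP lease. A minimal sketch of that pattern, with a hypothetical lookupIP helper standing in for the lease query:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical stand-in for querying the network's
	// DHCP leases; it fails until the guest has requested an address.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP polls with a growing, jittered delay, mirroring the
	// 298ms -> 3.88s spread of retry intervals in the log above.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)/2)))
			if delay < 4*time.Second {
				delay += delay / 2 // grow ~1.5x per attempt
			}
		}
		return "", fmt.Errorf("timed out after %s waiting for domain to come up", timeout)
	}

	func main() {
		if ip, err := waitForIP(5 * time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("found domain IP:", ip)
		}
	}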
	I0414 17:39:41.423059  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:41.423639  206309 main.go:141] libmachine: (old-k8s-version-768580) found domain IP: 192.168.72.58
	I0414 17:39:41.423664  206309 main.go:141] libmachine: (old-k8s-version-768580) reserving static IP address...
	I0414 17:39:41.423688  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has current primary IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:41.424019  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-768580", mac: "52:54:00:d8:47:6d", ip: "192.168.72.58"} in network mk-old-k8s-version-768580
	I0414 17:39:41.500117  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | Getting to WaitForSSH function...
	I0414 17:39:41.500149  206309 main.go:141] libmachine: (old-k8s-version-768580) reserved static IP address 192.168.72.58 for domain old-k8s-version-768580
	I0414 17:39:41.500160  206309 main.go:141] libmachine: (old-k8s-version-768580) waiting for SSH...
	I0414 17:39:41.503790  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:41.504277  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:41.504307  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
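The lease-matching DBG lines above come from libvirt's DHCP lease table for the private network. The same Iface/ExpiryTime/Mac/IPaddr columns can be pulled by hand; a small sketch, using the network name from this run:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// virsh net-dhcp-leases prints the lease fields that appear in
		// the "found host DHCP lease matching ..." lines above.
		out, err := exec.Command("virsh", "-c", "qemu:///system",
			"net-dhcp-leases", "mk-old-k8s-version-768580").CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "net-dhcp-leases: %v: %s\n", err, out)
			os.Exit(1)
		}
		fmt.Print(string(out))
	}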
	I0414 17:39:41.504542  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | Using SSH client type: external
	I0414 17:39:41.504570  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | Using SSH private key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa (-rw-------)
	I0414 17:39:41.504617  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 17:39:41.504632  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | About to run SSH command:
	I0414 17:39:41.504645  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | exit 0
	I0414 17:39:41.650429  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | SSH cmd err, output: <nil>: 
	I0414 17:39:41.650786  206309 main.go:141] libmachine: (old-k8s-version-768580) KVM machine creation complete
	I0414 17:39:41.651234  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetConfigRaw
	I0414 17:39:41.720559  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:39:41.720889  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:39:41.721094  206309 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 17:39:41.721168  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetState
	I0414 17:39:41.722982  206309 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 17:39:41.722998  206309 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 17:39:41.723006  206309 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 17:39:41.723015  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:41.726216  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:41.726577  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:41.726600  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:41.727533  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:39:41.727717  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:41.727870  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:41.728007  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:39:41.728195  206309 main.go:141] libmachine: Using SSH client type: native
	I0414 17:39:41.728469  206309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:39:41.728482  206309 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 17:39:41.833225  206309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
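Both SSH probes above (the external /usr/bin/ssh invocation and the native Go client) boil down to running `exit 0` and checking the error. A minimal sketch with golang.org/x/crypto/ssh; the key path, user, and address are from this run, and the host-key policy mirrors the StrictHostKeyChecking=no flags above:

	package main

	import (
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // equivalent of StrictHostKeyChecking=no
		}
		client, err := ssh.Dial("tcp", "192.168.72.58:22", cfg)
		if err != nil {
			log.Fatal(err) // VM not reachable yet; callers retry, as the log does
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		if err := sess.Run("exit 0"); err != nil {
			log.Fatal(err)
		}
		log.Println("SSH liveness probe succeeded")
	}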
	I0414 17:39:41.833253  206309 main.go:141] libmachine: Detecting the provisioner...
	I0414 17:39:41.833265  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:41.835829  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:41.836187  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:41.836219  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:41.836342  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:39:41.836500  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:41.836683  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:41.836833  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:39:41.836999  206309 main.go:141] libmachine: Using SSH client type: native
	I0414 17:39:41.837214  206309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:39:41.837226  206309 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 17:39:41.947434  206309 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 17:39:41.947515  206309 main.go:141] libmachine: found compatible host: buildroot
	I0414 17:39:41.947522  206309 main.go:141] libmachine: Provisioning with buildroot...
	I0414 17:39:41.947529  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetMachineName
	I0414 17:39:41.947776  206309 buildroot.go:166] provisioning hostname "old-k8s-version-768580"
	I0414 17:39:41.947808  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetMachineName
	I0414 17:39:41.947998  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:41.952149  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:41.953630  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:41.953669  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:41.953955  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:39:41.954175  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:41.954317  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:41.954463  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:39:41.954605  206309 main.go:141] libmachine: Using SSH client type: native
	I0414 17:39:41.954843  206309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:39:41.954859  206309 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-768580 && echo "old-k8s-version-768580" | sudo tee /etc/hostname
	I0414 17:39:42.077742  206309 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-768580
	
	I0414 17:39:42.077779  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:42.393444  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:42.393900  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:42.393927  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:42.394066  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:39:42.394268  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:42.394415  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:42.394551  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:39:42.394719  206309 main.go:141] libmachine: Using SSH client type: native
	I0414 17:39:42.395039  206309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:39:42.395066  206309 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-768580' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-768580/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-768580' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 17:39:42.520647  206309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 17:39:42.520676  206309 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20349-149500/.minikube CaCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20349-149500/.minikube}
	I0414 17:39:42.520723  206309 buildroot.go:174] setting up certificates
	I0414 17:39:42.520739  206309 provision.go:84] configureAuth start
	I0414 17:39:42.520754  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetMachineName
	I0414 17:39:42.521063  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:39:42.524518  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:42.524892  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:42.524916  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:42.525081  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:42.527885  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:42.528213  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:42.528232  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:42.528447  206309 provision.go:143] copyHostCerts
	I0414 17:39:42.528508  206309 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem, removing ...
	I0414 17:39:42.528538  206309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem
	I0414 17:39:42.528644  206309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem (1082 bytes)
	I0414 17:39:42.528776  206309 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem, removing ...
	I0414 17:39:42.528790  206309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem
	I0414 17:39:42.528837  206309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem (1123 bytes)
	I0414 17:39:42.528924  206309 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem, removing ...
	I0414 17:39:42.528936  206309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem
	I0414 17:39:42.528972  206309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem (1675 bytes)
	I0414 17:39:42.529047  206309 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-768580 san=[127.0.0.1 192.168.72.58 localhost minikube old-k8s-version-768580]
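The server cert generated above is a CA-signed leaf whose SANs cover every name and address the node answers to. A compact crypto/x509 sketch of the same shape; a throwaway self-signed CA stands in for the ca.pem/ca-key.pem pair, and the org, SAN list, and 26280h expiry are copied from the log:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}

	func main() {
		// Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem.
		caKey := must(rsa.GenerateKey(rand.Reader, 2048))
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
		caCert := must(x509.ParseCertificate(caDER))

		// Server leaf with the org and SAN list from the provision line above.
		srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-768580"}},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-768580"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.58")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}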
	I0414 17:39:42.842504  206309 provision.go:177] copyRemoteCerts
	I0414 17:39:42.842558  206309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 17:39:42.842589  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:42.845945  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:42.846351  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:42.846378  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:42.846579  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:39:42.846765  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:42.846933  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:39:42.847052  206309 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:39:42.933045  206309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 17:39:42.963827  206309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0414 17:39:42.992676  206309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 17:39:43.022320  206309 provision.go:87] duration metric: took 501.565888ms to configureAuth
	I0414 17:39:43.022341  206309 buildroot.go:189] setting minikube options for container-runtime
	I0414 17:39:43.022479  206309 config.go:182] Loaded profile config "old-k8s-version-768580": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 17:39:43.022585  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:43.026086  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.026455  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:43.026478  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.026607  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:39:43.026808  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:43.026978  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:43.027134  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:39:43.027281  206309 main.go:141] libmachine: Using SSH client type: native
	I0414 17:39:43.027458  206309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:39:43.027470  206309 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 17:39:43.289257  206309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
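The restart command echoed above is assembled from the runtime's insecure-registry option before being sent over the SSH session. A sketch of building that exact command string (string construction only; running it remotely follows the same pattern as the exit-0 probe earlier):

	package main

	import "fmt"

	func main() {
		opts := "--insecure-registry 10.96.0.0/12" // matches the ServiceCIDR in the cluster config
		cmd := fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
	CRIO_MINIKUBE_OPTIONS='%s '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, opts)
		fmt.Println(cmd)
	}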
	I0414 17:39:43.289285  206309 main.go:141] libmachine: Checking connection to Docker...
	I0414 17:39:43.289296  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetURL
	I0414 17:39:43.291038  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | using libvirt version 6000000
	I0414 17:39:43.293844  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.294256  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:43.294279  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.294429  206309 main.go:141] libmachine: Docker is up and running!
	I0414 17:39:43.294457  206309 main.go:141] libmachine: Reticulating splines...
	I0414 17:39:43.294467  206309 client.go:171] duration metric: took 25.29694666s to LocalClient.Create
	I0414 17:39:43.294490  206309 start.go:167] duration metric: took 25.297021937s to libmachine.API.Create "old-k8s-version-768580"
	I0414 17:39:43.294499  206309 start.go:293] postStartSetup for "old-k8s-version-768580" (driver="kvm2")
	I0414 17:39:43.294512  206309 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 17:39:43.294539  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:39:43.294780  206309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 17:39:43.294808  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:43.297542  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.297849  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:43.297874  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.297992  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:39:43.298183  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:43.298332  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:39:43.298480  206309 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:39:43.387489  206309 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 17:39:43.393038  206309 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 17:39:43.393065  206309 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/addons for local assets ...
	I0414 17:39:43.393148  206309 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/files for local assets ...
	I0414 17:39:43.393256  206309 filesync.go:149] local asset: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem -> 1566332.pem in /etc/ssl/certs
	I0414 17:39:43.393393  206309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 17:39:43.403782  206309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:39:43.431753  206309 start.go:296] duration metric: took 137.238727ms for postStartSetup
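The postStartSetup phase above scans the profile's files tree and mirrors each asset into the guest at the same relative path, which is how files/etc/ssl/certs/1566332.pem ends up in /etc/ssl/certs. A rough sketch of that scan, assuming a .minikube layout like the one in the log paths:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func main() {
	// Assumption: run next to a minikube home; the real scanner walks
	// $MINIKUBE_HOME/.minikube/files as shown in the filesync.go lines above.
	root := ".minikube/files"
	_ = filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return nil // skip unreadable entries and directories
		}
		rel, _ := filepath.Rel(root, path)
		// Each local asset maps to the same relative path on the guest.
		fmt.Printf("local asset: %s -> /%s\n", path, rel)
		return nil
	})
}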
	I0414 17:39:43.431803  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetConfigRaw
	I0414 17:39:43.432345  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:39:43.435200  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.435632  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:43.435656  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.435905  206309 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/config.json ...
	I0414 17:39:43.436065  206309 start.go:128] duration metric: took 25.459688147s to createHost
	I0414 17:39:43.436084  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:43.438591  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.438941  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:43.438967  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.439138  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:39:43.439342  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:43.439518  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:43.439686  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:39:43.439895  206309 main.go:141] libmachine: Using SSH client type: native
	I0414 17:39:43.440160  206309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:39:43.440180  206309 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 17:39:43.546702  206309 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744652383.520454048
	
	I0414 17:39:43.546726  206309 fix.go:216] guest clock: 1744652383.520454048
	I0414 17:39:43.546735  206309 fix.go:229] Guest: 2025-04-14 17:39:43.520454048 +0000 UTC Remote: 2025-04-14 17:39:43.436074629 +0000 UTC m=+29.751364015 (delta=84.379419ms)
	I0414 17:39:43.546765  206309 fix.go:200] guest clock delta is within tolerance: 84.379419ms
	I0414 17:39:43.546772  206309 start.go:83] releasing machines lock for "old-k8s-version-768580", held for 25.570551162s
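The fix.go lines above run `date +%s.%N` on the guest, compare it to the host clock, and accept the ~84ms drift. A small sketch of that comparison; the 2-second tolerance here is an assumption for illustration, not minikube's exact constant:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports the absolute skew between guest and host clocks
// and whether it falls inside the allowed drift.
func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(84379419 * time.Nanosecond) // the 84.379419ms delta from the log
	d, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v withinTolerance=%v\n", d, ok)
}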
	I0414 17:39:43.546801  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:39:43.547104  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:39:43.550401  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.550768  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:43.550797  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.550932  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:39:43.551471  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:39:43.551658  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:39:43.551750  206309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 17:39:43.551789  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:43.551892  206309 ssh_runner.go:195] Run: cat /version.json
	I0414 17:39:43.551916  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:39:43.554584  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.554847  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.554892  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:43.554917  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.555095  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:39:43.555259  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:43.555297  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:43.555321  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:43.555437  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:39:43.555499  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:39:43.555568  206309 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:39:43.555586  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:39:43.555669  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:39:43.555764  206309 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:39:43.671272  206309 ssh_runner.go:195] Run: systemctl --version
	I0414 17:39:43.681482  206309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 17:39:43.851355  206309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 17:39:43.858204  206309 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 17:39:43.858287  206309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 17:39:43.878012  206309 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 17:39:43.878034  206309 start.go:495] detecting cgroup driver to use...
	I0414 17:39:43.878118  206309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 17:39:43.903925  206309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 17:39:43.926852  206309 docker.go:217] disabling cri-docker service (if available) ...
	I0414 17:39:43.926926  206309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 17:39:43.947243  206309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 17:39:43.966830  206309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 17:39:44.156390  206309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 17:39:44.528090  206309 docker.go:233] disabling docker service ...
	I0414 17:39:44.528164  206309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 17:39:44.602105  206309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 17:39:44.629760  206309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 17:39:44.831044  206309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 17:39:44.985015  206309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 17:39:45.000517  206309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 17:39:45.030658  206309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0414 17:39:45.030738  206309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:39:45.041783  206309 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 17:39:45.041880  206309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:39:45.052915  206309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:39:45.064386  206309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:39:45.077283  206309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
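The steps just above wire up the runtime: crictl is pointed at CRI-O's socket via /etc/crictl.yaml, and 02-crio.conf is rewritten with sed for the pause image and the cgroupfs driver. A sketch of the crictl.yaml step; it writes to a temp file so it runs without root, whereas the real target is /etc/crictl.yaml on the guest:

package main

import (
	"fmt"
	"os"
)

func main() {
	// The exact content the log shows being piped through `sudo tee`.
	content := "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	f, err := os.CreateTemp("", "crictl-*.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if _, err := f.WriteString(content); err != nil {
		panic(err)
	}
	fmt.Println("wrote", f.Name(), "(would be /etc/crictl.yaml on the guest)")
}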
	I0414 17:39:45.088191  206309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 17:39:45.098248  206309 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 17:39:45.098290  206309 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 17:39:45.112555  206309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
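When the bridge-netfilter sysctl is missing, the log shows the fallback: load br_netfilter with modprobe, then enable IPv4 forwarding. A sketch of that try-then-fallback sequence; it needs root on the guest to actually succeed:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and folds its combined output into any error.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w (%s)", name, args, err, out)
	}
	return nil
}

func main() {
	// First try to read the sysctl; status 255 above means the key is absent.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("sysctl missing, loading br_netfilter:", err)
		_ = run("sudo", "modprobe", "br_netfilter")
	}
	// Then enable forwarding, exactly as the log's next command does.
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}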
	I0414 17:39:45.124069  206309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:39:45.253393  206309 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 17:39:45.392843  206309 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 17:39:45.392921  206309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 17:39:45.399668  206309 start.go:563] Will wait 60s for crictl version
	I0414 17:39:45.399730  206309 ssh_runner.go:195] Run: which crictl
	I0414 17:39:45.404903  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 17:39:45.454416  206309 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 17:39:45.454515  206309 ssh_runner.go:195] Run: crio --version
	I0414 17:39:45.492346  206309 ssh_runner.go:195] Run: crio --version
	I0414 17:39:45.532360  206309 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0414 17:39:45.533486  206309 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:39:45.537286  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:45.537978  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:39:45.538007  206309 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:39:45.538265  206309 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0414 17:39:45.545601  206309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 17:39:45.573179  206309 kubeadm.go:883] updating cluster {Name:old-k8s-version-768580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 17:39:45.573309  206309 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 17:39:45.573374  206309 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:39:45.627730  206309 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 17:39:45.627801  206309 ssh_runner.go:195] Run: which lz4
	I0414 17:39:45.633064  206309 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 17:39:45.639003  206309 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 17:39:45.639033  206309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0414 17:39:47.899622  206309 crio.go:462] duration metric: took 2.26662892s to copy over tarball
	I0414 17:39:47.899745  206309 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 17:39:50.954958  206309 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.055176713s)
	I0414 17:39:50.954990  206309 crio.go:469] duration metric: took 3.055335621s to extract the tarball
	I0414 17:39:50.955022  206309 ssh_runner.go:146] rm: /preloaded.tar.lz4
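The preload path above is: stat /preloaded.tar.lz4 on the guest, scp the cached tarball over when absent, unpack it into /var with extended attributes preserved, then delete it. A condensed sketch of the check-and-extract portion (the scp step is elided):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Existence check mirroring the `stat -c "%s %y"` call in the log.
	if err := exec.Command("stat", "-c", "%s %y", "/preloaded.tar.lz4").Run(); err != nil {
		fmt.Println("preload tarball missing; would scp it from the host cache first")
		return
	}
	// Extraction with the same flags the log shows: lz4 filter, xattrs kept.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
	}
}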
	I0414 17:39:51.019112  206309 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:39:51.082680  206309 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 17:39:51.082702  206309 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 17:39:51.082749  206309 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:39:51.082767  206309 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:39:51.082962  206309 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0414 17:39:51.082981  206309 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:39:51.083010  206309 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:39:51.082970  206309 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:39:51.083155  206309 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0414 17:39:51.083538  206309 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0414 17:39:51.084253  206309 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:39:51.085890  206309 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:39:51.086002  206309 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0414 17:39:51.086239  206309 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0414 17:39:51.086334  206309 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:39:51.086445  206309 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:39:51.088104  206309 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:39:51.088399  206309 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0414 17:39:51.225192  206309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0414 17:39:51.225193  206309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0414 17:39:51.227336  206309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:39:51.228230  206309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:39:51.229798  206309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:39:51.230315  206309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:39:51.246308  206309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0414 17:39:51.409327  206309 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0414 17:39:51.409349  206309 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0414 17:39:51.409376  206309 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0414 17:39:51.409387  206309 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:39:51.409387  206309 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0414 17:39:51.409405  206309 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0414 17:39:51.409408  206309 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0414 17:39:51.409430  206309 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:39:51.409436  206309 ssh_runner.go:195] Run: which crictl
	I0414 17:39:51.409440  206309 ssh_runner.go:195] Run: which crictl
	I0414 17:39:51.409447  206309 ssh_runner.go:195] Run: which crictl
	I0414 17:39:51.409456  206309 ssh_runner.go:195] Run: which crictl
	I0414 17:39:51.433078  206309 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0414 17:39:51.433139  206309 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:39:51.433168  206309 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0414 17:39:51.433193  206309 ssh_runner.go:195] Run: which crictl
	I0414 17:39:51.433200  206309 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:39:51.433240  206309 ssh_runner.go:195] Run: which crictl
	I0414 17:39:51.439135  206309 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0414 17:39:51.439160  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 17:39:51.439170  206309 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0414 17:39:51.439224  206309 ssh_runner.go:195] Run: which crictl
	I0414 17:39:51.439282  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 17:39:51.439295  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:39:51.439360  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:39:51.442543  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:39:51.444143  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:39:51.593190  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 17:39:51.593421  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 17:39:51.612429  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 17:39:51.612525  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:39:51.612602  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:39:51.612672  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:39:51.612712  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:39:51.735373  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 17:39:51.735639  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 17:39:51.770519  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:39:51.770519  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 17:39:51.778792  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:39:51.778851  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:39:51.778901  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:39:51.916148  206309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 17:39:51.916170  206309 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0414 17:39:51.939350  206309 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0414 17:39:51.939413  206309 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0414 17:39:51.961049  206309 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0414 17:39:51.961108  206309 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0414 17:39:51.961145  206309 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0414 17:39:51.981763  206309 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0414 17:39:52.100174  206309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:39:52.240553  206309 cache_images.go:92] duration metric: took 1.157833313s to LoadCachedImages
	W0414 17:39:52.240669  206309 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0414 17:39:52.240692  206309 kubeadm.go:934] updating node { 192.168.72.58 8443 v1.20.0 crio true true} ...
	I0414 17:39:52.240851  206309 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-768580 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 17:39:52.240934  206309 ssh_runner.go:195] Run: crio config
	I0414 17:39:52.294911  206309 cni.go:84] Creating CNI manager for ""
	I0414 17:39:52.294952  206309 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:39:52.294969  206309 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 17:39:52.294996  206309 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.58 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-768580 NodeName:old-k8s-version-768580 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0414 17:39:52.295169  206309 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-768580"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 17:39:52.295240  206309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0414 17:39:52.305980  206309 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 17:39:52.306047  206309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 17:39:52.317069  206309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0414 17:39:52.336081  206309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 17:39:52.354224  206309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0414 17:39:52.373660  206309 ssh_runner.go:195] Run: grep 192.168.72.58	control-plane.minikube.internal$ /etc/hosts
	I0414 17:39:52.378934  206309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 17:39:52.394742  206309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:39:52.551228  206309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:39:52.573012  206309 certs.go:68] Setting up /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580 for IP: 192.168.72.58
	I0414 17:39:52.573034  206309 certs.go:194] generating shared ca certs ...
	I0414 17:39:52.573050  206309 certs.go:226] acquiring lock for ca certs: {Name:mk65518f71a0fe967168d84423f624d889cf0622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:39:52.573226  206309 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key
	I0414 17:39:52.573280  206309 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key
	I0414 17:39:52.573293  206309 certs.go:256] generating profile certs ...
	I0414 17:39:52.573361  206309 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/client.key
	I0414 17:39:52.573382  206309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/client.crt with IP's: []
	I0414 17:39:52.792482  206309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/client.crt ...
	I0414 17:39:52.792513  206309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/client.crt: {Name:mk8b565709e9f8ca97c9a4b1a9f5acd5ea05c4c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:39:52.829084  206309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/client.key ...
	I0414 17:39:52.829127  206309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/client.key: {Name:mk52dd8773c8c55cd3fca6cbbe90eecea9a8988a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:39:52.829265  206309 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.key.0f5f550a
	I0414 17:39:52.829286  206309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.crt.0f5f550a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.58]
	I0414 17:39:53.132546  206309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.crt.0f5f550a ...
	I0414 17:39:53.132579  206309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.crt.0f5f550a: {Name:mk4927613f2d2a5d70279b14a038fff00965a4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:39:53.132753  206309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.key.0f5f550a ...
	I0414 17:39:53.132773  206309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.key.0f5f550a: {Name:mk050758690dfc7fe46b11bc5076f7d397c27980 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:39:53.132886  206309 certs.go:381] copying /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.crt.0f5f550a -> /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.crt
	I0414 17:39:53.132979  206309 certs.go:385] copying /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.key.0f5f550a -> /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.key
	I0414 17:39:53.133085  206309 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/proxy-client.key
	I0414 17:39:53.133110  206309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/proxy-client.crt with IP's: []
	I0414 17:39:53.317041  206309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/proxy-client.crt ...
	I0414 17:39:53.317075  206309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/proxy-client.crt: {Name:mk397457229ba89e10f48579a72181e917a0fda2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:39:53.317259  206309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/proxy-client.key ...
	I0414 17:39:53.317277  206309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/proxy-client.key: {Name:mk42997316fa111458b9a6ec311841e9d07e7e30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
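The certs phase above mints per-profile client, apiserver, and aggregator certificates; the apiserver cert is issued for the IP SANs listed in the log. A self-contained sketch of issuing a cert with those SANs using the standard library; it self-signs for brevity, whereas minikube signs with its CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs copied from the crypto.go line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.58"),
		},
	}
	// Self-signed: template doubles as parent. minikube would pass its CA here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("generated DER cert, %d bytes\n", len(der))
}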
	I0414 17:39:53.317481  206309 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem (1338 bytes)
	W0414 17:39:53.317525  206309 certs.go:480] ignoring /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633_empty.pem, impossibly tiny 0 bytes
	I0414 17:39:53.317537  206309 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 17:39:53.317559  206309 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem (1082 bytes)
	I0414 17:39:53.317581  206309 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem (1123 bytes)
	I0414 17:39:53.317601  206309 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem (1675 bytes)
	I0414 17:39:53.317638  206309 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:39:53.318201  206309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 17:39:53.347651  206309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 17:39:53.373696  206309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 17:39:53.407644  206309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 17:39:53.451777  206309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 17:39:53.485639  206309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 17:39:53.530893  206309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 17:39:53.557713  206309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 17:39:53.590437  206309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /usr/share/ca-certificates/1566332.pem (1708 bytes)
	I0414 17:39:53.618860  206309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 17:39:53.646052  206309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem --> /usr/share/ca-certificates/156633.pem (1338 bytes)
	I0414 17:39:53.672341  206309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 17:39:53.690529  206309 ssh_runner.go:195] Run: openssl version
	I0414 17:39:53.696380  206309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1566332.pem && ln -fs /usr/share/ca-certificates/1566332.pem /etc/ssl/certs/1566332.pem"
	I0414 17:39:53.706862  206309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1566332.pem
	I0414 17:39:53.711541  206309 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 16:39 /usr/share/ca-certificates/1566332.pem
	I0414 17:39:53.711593  206309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1566332.pem
	I0414 17:39:53.717430  206309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1566332.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 17:39:53.728150  206309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 17:39:53.744238  206309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:39:53.748999  206309 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 16:31 /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:39:53.749049  206309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:39:53.755573  206309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 17:39:53.767120  206309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156633.pem && ln -fs /usr/share/ca-certificates/156633.pem /etc/ssl/certs/156633.pem"
	I0414 17:39:53.778471  206309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156633.pem
	I0414 17:39:53.783396  206309 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 16:39 /usr/share/ca-certificates/156633.pem
	I0414 17:39:53.783446  206309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156633.pem
	I0414 17:39:53.789381  206309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/156633.pem /etc/ssl/certs/51391683.0"
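The openssl sequence above hashes each CA PEM and links it as /etc/ssl/certs/<hash>.0, the filename OpenSSL's subject-hash lookup expects (hence 3ec20f2e.0, b5213941.0, 51391683.0). A sketch of computing that hash and the symlink it implies; the input path is a placeholder:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder path
	// Same invocation as the log: openssl prints the subject hash on stdout.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out))
	// The link the `ln -fs` commands above create for each trusted cert.
	fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", pem, hash)
}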
	I0414 17:39:53.801452  206309 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 17:39:53.805633  206309 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 17:39:53.805676  206309 kubeadm.go:392] StartCluster: {Name:old-k8s-version-768580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:39:53.805737  206309 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 17:39:53.805797  206309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:39:53.845948  206309 cri.go:89] found id: ""
	I0414 17:39:53.846023  206309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 17:39:53.855666  206309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:39:53.865017  206309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:39:53.874162  206309 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:39:53.874184  206309 kubeadm.go:157] found existing configuration files:
	
	I0414 17:39:53.874222  206309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:39:53.883558  206309 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:39:53.883611  206309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:39:53.892474  206309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:39:53.901219  206309 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:39:53.901293  206309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:39:53.910693  206309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:39:53.919775  206309 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:39:53.919833  206309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:39:53.928940  206309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:39:53.937440  206309 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:39:53.937488  206309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
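Before kubeadm init, each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; otherwise it is removed, which is what the grep/rm pairs above show (here every grep fails because this is a first start). A sketch of that cleanup loop:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		// grep exits non-zero when the endpoint is absent or the file is missing.
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			fmt.Printf("%s lacks %s (or is missing); removing\n", path, endpoint)
			_ = exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}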
	I0414 17:39:53.946281  206309 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:39:54.059833  206309 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 17:39:54.060065  206309 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:39:54.208911  206309 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:39:54.209111  206309 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:39:54.209262  206309 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 17:39:54.425910  206309 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:39:54.427837  206309 out.go:235]   - Generating certificates and keys ...
	I0414 17:39:54.427939  206309 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:39:54.428024  206309 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:39:54.503982  206309 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 17:39:54.625623  206309 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 17:39:54.756443  206309 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 17:39:54.915076  206309 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 17:39:55.161590  206309 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 17:39:55.164486  206309 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-768580] and IPs [192.168.72.58 127.0.0.1 ::1]
	I0414 17:39:55.442602  206309 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 17:39:55.448742  206309 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-768580] and IPs [192.168.72.58 127.0.0.1 ::1]
	I0414 17:39:55.552538  206309 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 17:39:55.604447  206309 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 17:39:55.711716  206309 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 17:39:55.712077  206309 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:39:55.851022  206309 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:39:56.036736  206309 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:39:56.249486  206309 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:39:56.469019  206309 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:39:56.487822  206309 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:39:56.487979  206309 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:39:56.488359  206309 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:39:56.632258  206309 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:39:56.634014  206309 out.go:235]   - Booting up control plane ...
	I0414 17:39:56.634151  206309 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:39:56.641961  206309 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:39:56.643439  206309 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:39:56.644497  206309 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:39:56.649647  206309 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 17:40:36.645012  206309 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 17:40:36.646137  206309 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:40:36.646397  206309 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:40:41.646714  206309 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:40:41.646985  206309 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:40:51.646170  206309 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:40:51.646398  206309 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:41:11.645884  206309 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:41:11.646149  206309 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:41:51.648073  206309 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:41:51.648601  206309 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:41:51.648662  206309 kubeadm.go:310] 
	I0414 17:41:51.648766  206309 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 17:41:51.648865  206309 kubeadm.go:310] 		timed out waiting for the condition
	I0414 17:41:51.648877  206309 kubeadm.go:310] 
	I0414 17:41:51.648984  206309 kubeadm.go:310] 	This error is likely caused by:
	I0414 17:41:51.649077  206309 kubeadm.go:310] 		- The kubelet is not running
	I0414 17:41:51.649323  206309 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 17:41:51.649334  206309 kubeadm.go:310] 
	I0414 17:41:51.649545  206309 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 17:41:51.649633  206309 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 17:41:51.649714  206309 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 17:41:51.649730  206309 kubeadm.go:310] 
	I0414 17:41:51.649995  206309 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 17:41:51.650180  206309 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 17:41:51.650197  206309 kubeadm.go:310] 
	I0414 17:41:51.650426  206309 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 17:41:51.650570  206309 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 17:41:51.650671  206309 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 17:41:51.650940  206309 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 17:41:51.650977  206309 kubeadm.go:310] 
	I0414 17:41:51.651349  206309 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:41:51.651599  206309 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 17:41:51.652086  206309 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0414 17:41:51.652210  206309 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-768580] and IPs [192.168.72.58 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-768580] and IPs [192.168.72.58 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0414 17:41:51.652253  206309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 17:41:53.756939  206309 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.104664248s)
	I0414 17:41:53.757011  206309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:41:53.774517  206309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:41:53.784836  206309 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:41:53.784859  206309 kubeadm.go:157] found existing configuration files:
	
	I0414 17:41:53.784913  206309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:41:53.794788  206309 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:41:53.794852  206309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:41:53.804698  206309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:41:53.813875  206309 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:41:53.813911  206309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:41:53.823405  206309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:41:53.832593  206309 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:41:53.832679  206309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:41:53.842043  206309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:41:53.851036  206309 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:41:53.851069  206309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 17:41:53.860340  206309 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:41:54.074132  206309 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:43:50.183690  206309 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 17:43:50.183827  206309 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 17:43:50.185577  206309 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 17:43:50.185639  206309 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:43:50.185741  206309 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:43:50.185893  206309 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:43:50.186053  206309 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 17:43:50.186148  206309 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:43:50.188412  206309 out.go:235]   - Generating certificates and keys ...
	I0414 17:43:50.188524  206309 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:43:50.188638  206309 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:43:50.188758  206309 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 17:43:50.188859  206309 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 17:43:50.188964  206309 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 17:43:50.189068  206309 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 17:43:50.189157  206309 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 17:43:50.189248  206309 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 17:43:50.189316  206309 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 17:43:50.189422  206309 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 17:43:50.189481  206309 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 17:43:50.189556  206309 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:43:50.189625  206309 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:43:50.189701  206309 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:43:50.189797  206309 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:43:50.189901  206309 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:43:50.190031  206309 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:43:50.190146  206309 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:43:50.190200  206309 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:43:50.190288  206309 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:43:50.191806  206309 out.go:235]   - Booting up control plane ...
	I0414 17:43:50.191910  206309 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:43:50.192028  206309 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:43:50.192129  206309 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:43:50.192239  206309 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:43:50.192443  206309 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 17:43:50.192522  206309 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 17:43:50.192614  206309 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:43:50.192873  206309 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:43:50.192968  206309 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:43:50.193209  206309 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:43:50.193314  206309 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:43:50.193585  206309 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:43:50.193696  206309 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:43:50.193997  206309 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:43:50.194133  206309 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:43:50.194385  206309 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:43:50.194394  206309 kubeadm.go:310] 
	I0414 17:43:50.194439  206309 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 17:43:50.194511  206309 kubeadm.go:310] 		timed out waiting for the condition
	I0414 17:43:50.194524  206309 kubeadm.go:310] 
	I0414 17:43:50.194553  206309 kubeadm.go:310] 	This error is likely caused by:
	I0414 17:43:50.194583  206309 kubeadm.go:310] 		- The kubelet is not running
	I0414 17:43:50.194671  206309 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 17:43:50.194680  206309 kubeadm.go:310] 
	I0414 17:43:50.194770  206309 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 17:43:50.194812  206309 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 17:43:50.194859  206309 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 17:43:50.194867  206309 kubeadm.go:310] 
	I0414 17:43:50.194971  206309 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 17:43:50.195106  206309 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 17:43:50.195118  206309 kubeadm.go:310] 
	I0414 17:43:50.195289  206309 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 17:43:50.195439  206309 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 17:43:50.195551  206309 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 17:43:50.195651  206309 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 17:43:50.195672  206309 kubeadm.go:310] 
	I0414 17:43:50.195732  206309 kubeadm.go:394] duration metric: took 3m56.390060019s to StartCluster
	I0414 17:43:50.195792  206309 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:43:50.195856  206309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:43:50.249637  206309 cri.go:89] found id: ""
	I0414 17:43:50.249668  206309 logs.go:282] 0 containers: []
	W0414 17:43:50.249679  206309 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:43:50.249687  206309 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:43:50.249745  206309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:43:50.293936  206309 cri.go:89] found id: ""
	I0414 17:43:50.293966  206309 logs.go:282] 0 containers: []
	W0414 17:43:50.293976  206309 logs.go:284] No container was found matching "etcd"
	I0414 17:43:50.293983  206309 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:43:50.294041  206309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:43:50.329596  206309 cri.go:89] found id: ""
	I0414 17:43:50.329626  206309 logs.go:282] 0 containers: []
	W0414 17:43:50.329637  206309 logs.go:284] No container was found matching "coredns"
	I0414 17:43:50.329645  206309 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:43:50.329711  206309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:43:50.368166  206309 cri.go:89] found id: ""
	I0414 17:43:50.368198  206309 logs.go:282] 0 containers: []
	W0414 17:43:50.368211  206309 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:43:50.368219  206309 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:43:50.368288  206309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:43:50.400868  206309 cri.go:89] found id: ""
	I0414 17:43:50.400897  206309 logs.go:282] 0 containers: []
	W0414 17:43:50.400910  206309 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:43:50.400922  206309 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:43:50.400988  206309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:43:50.435204  206309 cri.go:89] found id: ""
	I0414 17:43:50.435231  206309 logs.go:282] 0 containers: []
	W0414 17:43:50.435241  206309 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:43:50.435249  206309 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:43:50.435311  206309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:43:50.471048  206309 cri.go:89] found id: ""
	I0414 17:43:50.471074  206309 logs.go:282] 0 containers: []
	W0414 17:43:50.471082  206309 logs.go:284] No container was found matching "kindnet"
	I0414 17:43:50.471091  206309 logs.go:123] Gathering logs for kubelet ...
	I0414 17:43:50.471106  206309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:43:50.523379  206309 logs.go:123] Gathering logs for dmesg ...
	I0414 17:43:50.523412  206309 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:43:50.536422  206309 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:43:50.536452  206309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:43:50.709803  206309 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:43:50.709839  206309 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:43:50.709857  206309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:43:50.876341  206309 logs.go:123] Gathering logs for container status ...
	I0414 17:43:50.876381  206309 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0414 17:43:50.927118  206309 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 17:43:50.927170  206309 out.go:270] * 
	W0414 17:43:50.927230  206309 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 17:43:50.927247  206309 out.go:270] * 
	W0414 17:43:50.928334  206309 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 17:43:50.931905  206309 out.go:201] 
	W0414 17:43:50.933510  206309 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 17:43:50.933554  206309 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 17:43:50.933580  206309 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 17:43:50.935012  206309 out.go:201] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-768580 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
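The kubeadm output above already names the probes to run. A minimal sketch of that troubleshooting sequence on the profile's VM (the systemctl, journalctl, and crictl invocations are taken from the log itself; CONTAINERID is a placeholder for whatever ID the listing reports):

	out/minikube-linux-amd64 -p old-k8s-version-768580 ssh
	# inside the VM: check the kubelet first
	systemctl status kubelet
	journalctl -xeu kubelet
	# then look for crashed control-plane containers via CRI-O
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID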
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-768580 -n old-k8s-version-768580
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-768580 -n old-k8s-version-768580: exit status 6 (267.940407ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0414 17:43:51.251051  212839 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-768580" does not appear in /home/jenkins/minikube-integration/20349-149500/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-768580" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (277.59s)
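The stderr above also proposes a concrete retry. A sketch of that retry, reusing the failed run's profile and flags and adding only the suggested kubelet cgroup-driver override:

	out/minikube-linux-amd64 start -p old-k8s-version-768580 --memory=2200 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd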

TestStartStop/group/old-k8s-version/serial/DeployApp (0.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-768580 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-768580 create -f testdata/busybox.yaml: exit status 1 (57.167367ms)

** stderr ** 
	error: context "old-k8s-version-768580" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-768580 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-768580 -n old-k8s-version-768580
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-768580 -n old-k8s-version-768580: exit status 6 (266.080029ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0414 17:43:51.572055  212893 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-768580" does not appear in /home/jenkins/minikube-integration/20349-149500/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-768580" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-768580 -n old-k8s-version-768580
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-768580 -n old-k8s-version-768580: exit status 6 (305.118134ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0414 17:43:51.879688  212923 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-768580" does not appear in /home/jenkins/minikube-integration/20349-149500/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-768580" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.63s)
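Both status probes above fail the same way: the profile is missing from the kubeconfig, so kubectl's context is stale. The warning in the output names the fix; a sketch of applying and then verifying it (assuming the profile's VM is reachable at all):

	out/minikube-linux-amd64 -p old-k8s-version-768580 update-context
	kubectl config get-contexts
	kubectl --context old-k8s-version-768580 get nodes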

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (87.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-768580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0414 17:43:55.471564  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:43:57.758648  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:05.713011  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:08.605675  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:08.612023  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:08.623393  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:08.644742  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:08.686056  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:08.767438  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:08.929600  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:09.251757  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:09.893929  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:11.175537  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:13.736914  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:14.000643  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:18.858537  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:26.194370  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:29.100471  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:29.642479  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:42.840654  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:42.846989  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:42.858407  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:42.880373  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:42.921712  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:43.003140  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:43.164930  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:43.486202  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:44.128188  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:45.410469  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:47.972654  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:49.582550  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:53.094548  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:44:54.170212  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:45:03.336256  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:45:07.009733  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:45:07.156269  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-768580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m27.570294772s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-768580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
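The refusal on localhost:8443 means the addon callbacks never reached a running apiserver, which is consistent with the control plane never coming up during the first start. A quick check from the host (a sketch; the crictl usage mirrors the kubeadm hint quoted earlier):

	out/minikube-linux-amd64 -p old-k8s-version-768580 ssh -- sudo crictl ps -a | grep kube-apiserver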
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-768580 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-768580 describe deploy/metrics-server -n kube-system: exit status 1 (44.886793ms)

** stderr ** 
	error: context "old-k8s-version-768580" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-768580 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-768580 -n old-k8s-version-768580
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-768580 -n old-k8s-version-768580: exit status 6 (237.926948ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0414 17:45:19.739082  213482 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-768580" does not appear in /home/jenkins/minikube-integration/20349-149500/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-768580" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (87.85s)
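For reference, the --images=MetricsServer=... and --registries=MetricsServer=fake.domain flags in the enable call compose the fake.domain/registry.k8s.io/echoserver:1.4 reference that the assertion above expects; on a reachable cluster the substitution could be confirmed directly. A sketch:

	kubectl --context old-k8s-version-768580 -n kube-system \
	  describe deploy/metrics-server | grep Image:
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4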

TestStartStop/group/old-k8s-version/serial/SecondStart (527.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-768580 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0414 17:45:23.818294  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:45:30.544250  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:45:35.922221  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:46:04.780588  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:46:13.897907  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:46:29.077611  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:46:41.600475  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:46:45.781399  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:46:52.466301  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:47:08.946584  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:47:13.484175  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:47:23.150355  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:47:26.702467  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:47:50.851964  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:47:52.063156  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:48:19.763846  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:48:31.089780  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:48:45.219554  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-768580 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m45.464057449s)

-- stdout --
	* [old-k8s-version-768580] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20349
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-768580" primary control-plane node in "old-k8s-version-768580" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-768580" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0414 17:45:23.282546  213635 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:45:23.282636  213635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:45:23.282647  213635 out.go:358] Setting ErrFile to fd 2...
	I0414 17:45:23.282663  213635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:45:23.282871  213635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	I0414 17:45:23.283429  213635 out.go:352] Setting JSON to false
	I0414 17:45:23.284348  213635 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8821,"bootTime":1744643902,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 17:45:23.284402  213635 start.go:139] virtualization: kvm guest
	I0414 17:45:23.286322  213635 out.go:177] * [old-k8s-version-768580] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 17:45:23.287426  213635 out.go:177]   - MINIKUBE_LOCATION=20349
	I0414 17:45:23.287431  213635 notify.go:220] Checking for updates...
	I0414 17:45:23.289881  213635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 17:45:23.291059  213635 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:45:23.292002  213635 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 17:45:23.293350  213635 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 17:45:23.294814  213635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 17:45:23.296431  213635 config.go:182] Loaded profile config "old-k8s-version-768580": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 17:45:23.296945  213635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:45:23.296998  213635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:45:23.313119  213635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37103
	I0414 17:45:23.313580  213635 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:45:23.314124  213635 main.go:141] libmachine: Using API Version  1
	I0414 17:45:23.314148  213635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:45:23.314493  213635 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:45:23.314664  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:23.316572  213635 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0414 17:45:23.317553  213635 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 17:45:23.317841  213635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:45:23.317876  213635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:45:23.333791  213635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44023
	I0414 17:45:23.334298  213635 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:45:23.334832  213635 main.go:141] libmachine: Using API Version  1
	I0414 17:45:23.334859  213635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:45:23.335206  213635 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:45:23.335410  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:23.372523  213635 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 17:45:23.373766  213635 start.go:297] selected driver: kvm2
	I0414 17:45:23.373785  213635 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-768580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:45:23.373971  213635 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 17:45:23.374697  213635 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:45:23.374756  213635 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20349-149500/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 17:45:23.390328  213635 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 17:45:23.390891  213635 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 17:45:23.390939  213635 cni.go:84] Creating CNI manager for ""
	I0414 17:45:23.390997  213635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:45:23.391057  213635 start.go:340] cluster config:
	{Name:old-k8s-version-768580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:45:23.391177  213635 iso.go:125] acquiring lock: {Name:mk56ab209abfa01de10f2f82564ecd03de00499a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:45:23.393503  213635 out.go:177] * Starting "old-k8s-version-768580" primary control-plane node in "old-k8s-version-768580" cluster
	I0414 17:45:23.394590  213635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 17:45:23.394621  213635 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 17:45:23.394628  213635 cache.go:56] Caching tarball of preloaded images
	I0414 17:45:23.394721  213635 preload.go:172] Found /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 17:45:23.394735  213635 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0414 17:45:23.394836  213635 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/config.json ...
	I0414 17:45:23.395013  213635 start.go:360] acquireMachinesLock for old-k8s-version-768580: {Name:mk6f64d523f60ec1e047c10a4c586315976dcd43 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 17:45:38.262514  213635 start.go:364] duration metric: took 14.867477628s to acquireMachinesLock for "old-k8s-version-768580"
	I0414 17:45:38.262567  213635 start.go:96] Skipping create...Using existing machine configuration
	I0414 17:45:38.262576  213635 fix.go:54] fixHost starting: 
	I0414 17:45:38.262931  213635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:45:38.262975  213635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:45:38.282724  213635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39841
	I0414 17:45:38.283218  213635 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:45:38.283779  213635 main.go:141] libmachine: Using API Version  1
	I0414 17:45:38.283810  213635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:45:38.284194  213635 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:45:38.284403  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:38.284564  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetState
	I0414 17:45:38.285903  213635 fix.go:112] recreateIfNeeded on old-k8s-version-768580: state=Stopped err=<nil>
	I0414 17:45:38.285937  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	W0414 17:45:38.286051  213635 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 17:45:38.287537  213635 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-768580" ...
	I0414 17:45:38.288730  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .Start
	I0414 17:45:38.288903  213635 main.go:141] libmachine: (old-k8s-version-768580) starting domain...
	I0414 17:45:38.288928  213635 main.go:141] libmachine: (old-k8s-version-768580) ensuring networks are active...
	I0414 17:45:38.289671  213635 main.go:141] libmachine: (old-k8s-version-768580) Ensuring network default is active
	I0414 17:45:38.290082  213635 main.go:141] libmachine: (old-k8s-version-768580) Ensuring network mk-old-k8s-version-768580 is active
	I0414 17:45:38.290509  213635 main.go:141] libmachine: (old-k8s-version-768580) getting domain XML...
	I0414 17:45:38.291270  213635 main.go:141] libmachine: (old-k8s-version-768580) creating domain...
	I0414 17:45:39.584359  213635 main.go:141] libmachine: (old-k8s-version-768580) waiting for IP...
	I0414 17:45:39.585518  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:39.586108  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:39.586195  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:39.586107  213733 retry.go:31] will retry after 251.417692ms: waiting for domain to come up
	I0414 17:45:39.839778  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:39.840371  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:39.840397  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:39.840338  213733 retry.go:31] will retry after 258.330025ms: waiting for domain to come up
	I0414 17:45:40.100989  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:40.101667  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:40.101696  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:40.101631  213733 retry.go:31] will retry after 334.368733ms: waiting for domain to come up
	I0414 17:45:40.437266  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:40.438218  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:40.438251  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:40.438188  213733 retry.go:31] will retry after 588.313555ms: waiting for domain to come up
	I0414 17:45:41.027969  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:41.028685  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:41.028713  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:41.028667  213733 retry.go:31] will retry after 582.787602ms: waiting for domain to come up
	I0414 17:45:41.613756  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:41.614424  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:41.614476  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:41.614383  213733 retry.go:31] will retry after 695.01431ms: waiting for domain to come up
	I0414 17:45:42.311573  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:42.312134  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:42.312168  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:42.312092  213733 retry.go:31] will retry after 1.050124039s: waiting for domain to come up
	I0414 17:45:43.363977  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:43.364593  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:43.364642  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:43.364568  213733 retry.go:31] will retry after 1.011314768s: waiting for domain to come up
	I0414 17:45:44.377753  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:44.378268  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:44.378293  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:44.378225  213733 retry.go:31] will retry after 1.856494831s: waiting for domain to come up
	I0414 17:45:46.237268  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:46.237851  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:46.237881  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:46.237785  213733 retry.go:31] will retry after 1.749079149s: waiting for domain to come up
	I0414 17:45:47.990039  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:47.990637  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:47.990670  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:47.990601  213733 retry.go:31] will retry after 2.63350321s: waiting for domain to come up
	I0414 17:45:50.626885  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:50.627340  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:50.627368  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:50.627294  213733 retry.go:31] will retry after 2.57658473s: waiting for domain to come up
	I0414 17:45:53.207057  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:53.207562  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:53.207590  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:53.207520  213733 retry.go:31] will retry after 3.448748827s: waiting for domain to come up
	I0414 17:45:56.658750  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.659197  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has current primary IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.659235  213635 main.go:141] libmachine: (old-k8s-version-768580) found domain IP: 192.168.72.58
	I0414 17:45:56.659245  213635 main.go:141] libmachine: (old-k8s-version-768580) reserving static IP address...
	I0414 17:45:56.659616  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "old-k8s-version-768580", mac: "52:54:00:d8:47:6d", ip: "192.168.72.58"} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:56.659642  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | skip adding static IP to network mk-old-k8s-version-768580 - found existing host DHCP lease matching {name: "old-k8s-version-768580", mac: "52:54:00:d8:47:6d", ip: "192.168.72.58"}
	I0414 17:45:56.659654  213635 main.go:141] libmachine: (old-k8s-version-768580) reserved static IP address 192.168.72.58 for domain old-k8s-version-768580
	I0414 17:45:56.659671  213635 main.go:141] libmachine: (old-k8s-version-768580) waiting for SSH...
	I0414 17:45:56.659708  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | Getting to WaitForSSH function...
	I0414 17:45:56.661714  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.662056  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:56.662087  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.662202  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | Using SSH client type: external
	I0414 17:45:56.662226  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | Using SSH private key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa (-rw-------)
	I0414 17:45:56.662273  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 17:45:56.662292  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | About to run SSH command:
	I0414 17:45:56.662309  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | exit 0
	I0414 17:45:56.781680  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | SSH cmd err, output: <nil>: 
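
The WaitForSSH probe above boils down to running "exit 0" over SSH with the exact options logged at 17:45:56.662273; rearranged into a directly runnable command from the Jenkins host:

	ssh -F /dev/null \
	    -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	    -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa \
	    -p 22 docker@192.168.72.58 'exit 0' && echo "SSH is up"
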
	I0414 17:45:56.782109  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetConfigRaw
	I0414 17:45:56.782751  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:45:56.785158  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.785469  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:56.785502  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.785736  213635 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/config.json ...
	I0414 17:45:56.785961  213635 machine.go:93] provisionDockerMachine start ...
	I0414 17:45:56.785980  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:56.786175  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:56.788189  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.788560  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:56.788585  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.788720  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:56.788874  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:56.789008  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:56.789162  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:56.789316  213635 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:56.789519  213635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:45:56.789529  213635 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 17:45:56.890137  213635 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 17:45:56.890168  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetMachineName
	I0414 17:45:56.890394  213635 buildroot.go:166] provisioning hostname "old-k8s-version-768580"
	I0414 17:45:56.890418  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetMachineName
	I0414 17:45:56.890619  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:56.892966  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.893390  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:56.893410  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.893563  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:56.893750  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:56.893919  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:56.894061  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:56.894207  213635 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:56.894529  213635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:45:56.894549  213635 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-768580 && echo "old-k8s-version-768580" | sudo tee /etc/hostname
	I0414 17:45:57.008447  213635 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-768580
	
	I0414 17:45:57.008471  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:57.011111  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.011428  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:57.011469  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.011584  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:57.011804  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:57.011985  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:57.012096  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:57.012205  213635 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:57.012392  213635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:45:57.012407  213635 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-768580' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-768580/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-768580' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 17:45:57.132689  213635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 17:45:57.132739  213635 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20349-149500/.minikube CaCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20349-149500/.minikube}
	I0414 17:45:57.132763  213635 buildroot.go:174] setting up certificates
	I0414 17:45:57.132773  213635 provision.go:84] configureAuth start
	I0414 17:45:57.132784  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetMachineName
	I0414 17:45:57.133116  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:45:57.136014  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.136345  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:57.136374  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.136550  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:57.139546  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.140028  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:57.140059  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.140266  213635 provision.go:143] copyHostCerts
	I0414 17:45:57.140335  213635 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem, removing ...
	I0414 17:45:57.140361  213635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem
	I0414 17:45:57.140462  213635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem (1082 bytes)
	I0414 17:45:57.140589  213635 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem, removing ...
	I0414 17:45:57.140603  213635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem
	I0414 17:45:57.140655  213635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem (1123 bytes)
	I0414 17:45:57.140743  213635 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem, removing ...
	I0414 17:45:57.140761  213635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem
	I0414 17:45:57.140798  213635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem (1675 bytes)
	I0414 17:45:57.140884  213635 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-768580 san=[127.0.0.1 192.168.72.58 localhost minikube old-k8s-version-768580]
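
provision.go:117 mints a server certificate signed by the minikube CA, with the SANs listed in that line. A rough openssl rendition of the step, using shortened file names for the CA material; minikube's actual implementation does this in Go, not by shelling out:

	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem \
	    -subj "/O=jenkins.old-k8s-version-768580" |
	  openssl x509 -req -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.72.58,DNS:localhost,DNS:minikube,DNS:old-k8s-version-768580') \
	    -out server.pem -days 365
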
	I0414 17:45:57.638227  213635 provision.go:177] copyRemoteCerts
	I0414 17:45:57.638317  213635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 17:45:57.638348  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:57.641173  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.641530  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:57.641563  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.641714  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:57.641916  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:57.642092  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:57.642232  213635 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:45:57.724240  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 17:45:57.749634  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0414 17:45:57.776416  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 17:45:57.801692  213635 provision.go:87] duration metric: took 668.902854ms to configureAuth
	I0414 17:45:57.801722  213635 buildroot.go:189] setting minikube options for container-runtime
	I0414 17:45:57.801958  213635 config.go:182] Loaded profile config "old-k8s-version-768580": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 17:45:57.802054  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:57.804673  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.805023  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:57.805051  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.805250  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:57.805434  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:57.805597  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:57.805715  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:57.805892  213635 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:57.806134  213635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:45:57.806153  213635 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 17:45:58.022403  213635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 17:45:58.022437  213635 machine.go:96] duration metric: took 1.236460782s to provisionDockerMachine
	I0414 17:45:58.022452  213635 start.go:293] postStartSetup for "old-k8s-version-768580" (driver="kvm2")
	I0414 17:45:58.022466  213635 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 17:45:58.022505  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:58.022841  213635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 17:45:58.022875  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:58.025802  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.026223  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:58.026254  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.026507  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:58.026657  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:58.026765  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:58.026909  213635 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:45:58.112706  213635 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 17:45:58.117225  213635 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 17:45:58.117253  213635 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/addons for local assets ...
	I0414 17:45:58.117324  213635 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/files for local assets ...
	I0414 17:45:58.117416  213635 filesync.go:149] local asset: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem -> 1566332.pem in /etc/ssl/certs
	I0414 17:45:58.117503  213635 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 17:45:58.128036  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:45:58.152497  213635 start.go:296] duration metric: took 130.019138ms for postStartSetup
	I0414 17:45:58.152538  213635 fix.go:56] duration metric: took 19.889962017s for fixHost
	I0414 17:45:58.152587  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:58.155565  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.156016  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:58.156050  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.156233  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:58.156440  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:58.156667  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:58.156863  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:58.157079  213635 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:58.157365  213635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:45:58.157380  213635 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 17:45:58.262578  213635 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744652758.231554158
	
	I0414 17:45:58.262603  213635 fix.go:216] guest clock: 1744652758.231554158
	I0414 17:45:58.262612  213635 fix.go:229] Guest: 2025-04-14 17:45:58.231554158 +0000 UTC Remote: 2025-04-14 17:45:58.152542501 +0000 UTC m=+34.908827189 (delta=79.011657ms)
	I0414 17:45:58.262635  213635 fix.go:200] guest clock delta is within tolerance: 79.011657ms
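
The tolerance check in fix.go amounts to: read the guest clock over SSH, diff it against the host clock, accept small skew. In shell, with vmssh as a hypothetical stand-in for the full ssh invocation shown earlier:

	guest=$(vmssh 'date +%s.%N')
	host=$(date +%s.%N)
	delta=$(echo "$guest - $host" | bc)
	echo "guest clock delta: ${delta}s"   # here: ~0.079s, within tolerance
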
	I0414 17:45:58.262641  213635 start.go:83] releasing machines lock for "old-k8s-version-768580", held for 20.000092548s
	I0414 17:45:58.262660  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:58.262963  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:45:58.265585  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.265964  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:58.266004  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.266157  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:58.266649  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:58.266849  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:58.266978  213635 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 17:45:58.267030  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:58.267047  213635 ssh_runner.go:195] Run: cat /version.json
	I0414 17:45:58.267073  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:58.269647  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.269715  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.270071  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:58.270098  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.270124  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:58.270157  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.270238  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:58.270344  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:58.270424  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:58.270497  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:58.270566  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:58.270678  213635 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:45:58.270730  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:58.270836  213635 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:45:58.343285  213635 ssh_runner.go:195] Run: systemctl --version
	I0414 17:45:58.367988  213635 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 17:45:58.519539  213635 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 17:45:58.526018  213635 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 17:45:58.526083  213635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 17:45:58.542624  213635 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
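
The find invocation at 17:45:58.526083 is logged with its shell escaping stripped; a runnable, quoted form is roughly:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;
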
	I0414 17:45:58.542648  213635 start.go:495] detecting cgroup driver to use...
	I0414 17:45:58.542718  213635 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 17:45:58.558731  213635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 17:45:58.572169  213635 docker.go:217] disabling cri-docker service (if available) ...
	I0414 17:45:58.572211  213635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 17:45:58.585163  213635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 17:45:58.598940  213635 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 17:45:58.721667  213635 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 17:45:58.879281  213635 docker.go:233] disabling docker service ...
	I0414 17:45:58.879343  213635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 17:45:58.896126  213635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 17:45:58.908836  213635 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 17:45:59.033428  213635 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 17:45:59.166628  213635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 17:45:59.181684  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 17:45:59.200617  213635 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0414 17:45:59.200680  213635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:59.211541  213635 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 17:45:59.211600  213635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:59.223657  213635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:59.235487  213635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:59.248000  213635 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 17:45:59.261365  213635 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 17:45:59.273037  213635 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 17:45:59.273132  213635 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 17:45:59.288901  213635 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
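
Taken together, the three netfilter steps above do the following: probe the bridge sysctl, fall back to loading br_netfilter when the key is missing (the status-255 error is expected before the module is loaded), then enable IPv4 forwarding:

	sudo sysctl net.bridge.bridge-nf-call-iptables 2>/dev/null \
	    || sudo modprobe br_netfilter          # creates /proc/sys/net/bridge/*
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
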
	I0414 17:45:59.300042  213635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:45:59.423635  213635 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 17:45:59.529685  213635 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 17:45:59.529758  213635 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 17:45:59.534592  213635 start.go:563] Will wait 60s for crictl version
	I0414 17:45:59.534640  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:45:59.538651  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 17:45:59.578522  213635 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 17:45:59.578595  213635 ssh_runner.go:195] Run: crio --version
	I0414 17:45:59.605740  213635 ssh_runner.go:195] Run: crio --version
	I0414 17:45:59.635045  213635 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0414 17:45:59.636069  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:45:59.638462  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:59.638803  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:59.638829  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:59.639064  213635 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0414 17:45:59.643370  213635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 17:45:59.657222  213635 kubeadm.go:883] updating cluster {Name:old-k8s-version-768580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 17:45:59.657362  213635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 17:45:59.657409  213635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:45:59.704172  213635 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 17:45:59.704247  213635 ssh_runner.go:195] Run: which lz4
	I0414 17:45:59.708554  213635 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 17:45:59.712850  213635 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 17:45:59.712882  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0414 17:46:01.354039  213635 crio.go:462] duration metric: took 1.645520081s to copy over tarball
	I0414 17:46:01.354112  213635 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 17:46:04.261653  213635 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.907516994s)
	I0414 17:46:04.261683  213635 crio.go:469] duration metric: took 2.907610683s to extract the tarball
	I0414 17:46:04.261695  213635 ssh_runner.go:146] rm: /preloaded.tar.lz4
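
End to end, the preload sequence above is: push the cached tarball into the VM, unpack it under /var preserving capability xattrs, then remove it. A sketch with hypothetical vmscp/vmssh helpers wrapping the scp/ssh invocations from this log:

	tarball=/home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	vmscp "$tarball" /preloaded.tar.lz4
	vmssh 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4'
	vmssh 'sudo rm -f /preloaded.tar.lz4'
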
	I0414 17:46:04.307964  213635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:46:04.345077  213635 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 17:46:04.345112  213635 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 17:46:04.345199  213635 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:46:04.345203  213635 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.345239  213635 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.345249  213635 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0414 17:46:04.345318  213635 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.345321  213635 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.345209  213635 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.345436  213635 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.347103  213635 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.347115  213635 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.347128  213635 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.347132  213635 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.347093  213635 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.347109  213635 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.347093  213635 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0414 17:46:04.347164  213635 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:46:04.489472  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.490905  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.494468  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.498887  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.499207  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.503007  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.528129  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0414 17:46:04.591926  213635 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0414 17:46:04.591983  213635 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.592033  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.628524  213635 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0414 17:46:04.628568  213635 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.628604  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.691347  213635 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0414 17:46:04.691455  213635 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.691347  213635 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0414 17:46:04.691571  213635 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.691392  213635 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0414 17:46:04.691634  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.691661  213635 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.691393  213635 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0414 17:46:04.691706  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.691731  213635 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.691759  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.691509  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.696665  213635 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0414 17:46:04.696697  213635 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0414 17:46:04.696714  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.696727  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.696730  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.707222  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.707277  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.709851  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.710042  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.834502  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 17:46:04.834653  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.834668  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.856960  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.857034  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.857094  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.857179  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.983051  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.983060  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.983060  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 17:46:05.024632  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:05.024779  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:05.031272  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:05.031399  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:05.161869  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0414 17:46:05.170557  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0414 17:46:05.170702  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 17:46:05.195041  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0414 17:46:05.195041  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0414 17:46:05.208270  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0414 17:46:05.208341  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0414 17:46:05.220290  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0414 17:46:05.331240  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:46:05.471903  213635 cache_images.go:92] duration metric: took 1.126766183s to LoadCachedImages
	W0414 17:46:05.471974  213635 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
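
Per image, the cache_images/cri exchange above reduces to: read the image ID from the runtime, compare it against the expected hash, remove on mismatch, reload from the on-disk cache. For the coredns case (the reload is exactly what fails here, since the cached file is missing):

	img=registry.k8s.io/coredns:1.7.0
	want=bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16
	have=$(sudo podman image inspect --format '{{.Id}}' "$img" 2>/dev/null)
	if [ "$have" != "$want" ]; then
	    sudo /usr/bin/crictl rmi "$img"
	    # reload would read .minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0,
	    # which does not exist on the host -- hence the X warning above
	fi
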
	I0414 17:46:05.471985  213635 kubeadm.go:934] updating node { 192.168.72.58 8443 v1.20.0 crio true true} ...
	I0414 17:46:05.472082  213635 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-768580 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
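
A non-obvious detail in the generated kubelet drop-in above: the bare ExecStart= line is how systemd clears an inherited command list, so the second ExecStart replaces the base unit's command rather than adding a duplicate. A standalone demo of the pattern, with a hypothetical unit name:

	sudo mkdir -p /etc/systemd/system/demo.service.d
	sudo tee /etc/systemd/system/demo.service.d/override.conf >/dev/null <<-'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/local/bin/demo --verbose
	EOF
	sudo systemctl daemon-reload
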
	I0414 17:46:05.472172  213635 ssh_runner.go:195] Run: crio config
	I0414 17:46:05.531642  213635 cni.go:84] Creating CNI manager for ""
	I0414 17:46:05.531667  213635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:46:05.531678  213635 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 17:46:05.531697  213635 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.58 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-768580 NodeName:old-k8s-version-768580 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0414 17:46:05.531815  213635 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-768580"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 17:46:05.531897  213635 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0414 17:46:05.542769  213635 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 17:46:05.542861  213635 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 17:46:05.552930  213635 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0414 17:46:05.570087  213635 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 17:46:05.588483  213635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
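
With the rendered config landed at /var/tmp/minikube/kubeadm.yaml.new, one way to sanity-check it by hand inside the VM, without touching cluster state, is kubeadm's dry-run mode (assuming the staged kubeadm binary alongside the kubelet found above):

	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
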
	I0414 17:46:05.606443  213635 ssh_runner.go:195] Run: grep 192.168.72.58	control-plane.minikube.internal$ /etc/hosts
	I0414 17:46:05.610756  213635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
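
The temp-file-then-sudo-cp shape of that one-liner exists because redirection happens in the calling shell before sudo runs, so a plain "sudo ... > /etc/hosts" would be denied. An equivalent that buffers the new content first and writes it through tee under sudo:

	new=$({ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	        echo $'192.168.72.58\tcontrol-plane.minikube.internal'; })
	printf '%s\n' "$new" | sudo tee /etc/hosts >/dev/null
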
	I0414 17:46:05.622873  213635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:46:05.770402  213635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:46:05.789353  213635 certs.go:68] Setting up /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580 for IP: 192.168.72.58
	I0414 17:46:05.789374  213635 certs.go:194] generating shared ca certs ...
	I0414 17:46:05.789395  213635 certs.go:226] acquiring lock for ca certs: {Name:mk65518f71a0fe967168d84423f624d889cf0622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:46:05.789542  213635 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key
	I0414 17:46:05.789598  213635 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key
	I0414 17:46:05.789613  213635 certs.go:256] generating profile certs ...
	I0414 17:46:05.789717  213635 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/client.key
	I0414 17:46:05.789816  213635 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.key.0f5f550a
	I0414 17:46:05.789911  213635 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/proxy-client.key
	I0414 17:46:05.790030  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem (1338 bytes)
	W0414 17:46:05.790067  213635 certs.go:480] ignoring /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633_empty.pem, impossibly tiny 0 bytes
	I0414 17:46:05.790077  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 17:46:05.790130  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem (1082 bytes)
	I0414 17:46:05.790163  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem (1123 bytes)
	I0414 17:46:05.790195  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem (1675 bytes)
	I0414 17:46:05.790251  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:46:05.790829  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 17:46:05.852348  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 17:46:05.879909  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 17:46:05.924274  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 17:46:05.968318  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 17:46:06.004046  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 17:46:06.039672  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 17:46:06.068041  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 17:46:06.093159  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem --> /usr/share/ca-certificates/156633.pem (1338 bytes)
	I0414 17:46:06.118949  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /usr/share/ca-certificates/1566332.pem (1708 bytes)
	I0414 17:46:06.144480  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 17:46:06.171159  213635 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 17:46:06.189499  213635 ssh_runner.go:195] Run: openssl version
	I0414 17:46:06.196060  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156633.pem && ln -fs /usr/share/ca-certificates/156633.pem /etc/ssl/certs/156633.pem"
	I0414 17:46:06.206864  213635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156633.pem
	I0414 17:46:06.211352  213635 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 16:39 /usr/share/ca-certificates/156633.pem
	I0414 17:46:06.211407  213635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156633.pem
	I0414 17:46:06.217759  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/156633.pem /etc/ssl/certs/51391683.0"
	I0414 17:46:06.228546  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1566332.pem && ln -fs /usr/share/ca-certificates/1566332.pem /etc/ssl/certs/1566332.pem"
	I0414 17:46:06.239146  213635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1566332.pem
	I0414 17:46:06.243457  213635 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 16:39 /usr/share/ca-certificates/1566332.pem
	I0414 17:46:06.243511  213635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1566332.pem
	I0414 17:46:06.249141  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1566332.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 17:46:06.259582  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 17:46:06.269988  213635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:46:06.275271  213635 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 16:31 /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:46:06.275324  213635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:46:06.282428  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
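
The hash-then-symlink dance above is how OpenSSL's CA directory lookup works: each PEM in /etc/ssl/certs must be reachable via a link named <subject-hash>.0, where the hash comes from openssl x509 -hash -noout (b5213941.0 for minikubeCA.pem above). A hedged Go sketch of the same step; linkByHash is a hypothetical helper that shells out to openssl.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash reproduces the ln -fs step above: OpenSSL looks up CA
// certificates in a directory by <subject-hash>.0, so each PEM needs a
// symlink named after the output of `openssl x509 -hash -noout`.
func linkByHash(certDir, pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // mimic ln -f: replace any existing link
	return os.Symlink(pem, link)
}

func main() {
	// Paths are illustrative; the log above links minikubeCA.pem to b5213941.0.
	if err := linkByHash("/tmp/demo-certs", "/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
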
	I0414 17:46:06.293404  213635 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 17:46:06.298115  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 17:46:06.304513  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 17:46:06.310675  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 17:46:06.317218  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 17:46:06.324114  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 17:46:06.331759  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
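
Each openssl x509 -checkend 86400 run above asks one question: does this certificate expire within the next 24 hours (exit status 1 if so)? The same check in native Go, using a hypothetical expiresWithin helper:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend <seconds>`: report whether
// the certificate's NotAfter falls inside the next `window`.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
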
	I0414 17:46:06.337898  213635 kubeadm.go:392] StartCluster: {Name:old-k8s-version-768580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:46:06.337991  213635 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 17:46:06.338037  213635 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:46:06.381282  213635 cri.go:89] found id: ""
	I0414 17:46:06.381351  213635 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 17:46:06.392326  213635 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 17:46:06.392345  213635 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 17:46:06.392385  213635 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 17:46:06.402275  213635 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 17:46:06.403224  213635 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-768580" does not appear in /home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:46:06.403594  213635 kubeconfig.go:62] /home/jenkins/minikube-integration/20349-149500/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-768580" cluster setting kubeconfig missing "old-k8s-version-768580" context setting]
	I0414 17:46:06.404086  213635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:46:06.460048  213635 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 17:46:06.470500  213635 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.58
	I0414 17:46:06.470535  213635 kubeadm.go:1160] stopping kube-system containers ...
	I0414 17:46:06.470546  213635 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0414 17:46:06.470624  213635 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:46:06.509152  213635 cri.go:89] found id: ""
	I0414 17:46:06.509210  213635 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0414 17:46:06.526163  213635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:46:06.535901  213635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:46:06.535928  213635 kubeadm.go:157] found existing configuration files:
	
	I0414 17:46:06.535978  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:46:06.545480  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:46:06.545535  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:46:06.554610  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:46:06.563294  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:46:06.563347  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:46:06.572284  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:46:06.581431  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:46:06.581475  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:46:06.591211  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:46:06.600340  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:46:06.600408  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
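
The grep/rm pairs above implement stale-config cleanup: a kubeconfig under /etc/kubernetes survives only if it already references https://control-plane.minikube.internal:8443; anything else (including, as here, a missing file) is removed so the kubeadm phases below can regenerate it. A compact Go sketch of that rule; pruneStaleConf is illustrative.

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleConf drops a kubeconfig that does not reference the expected
// control-plane endpoint, so kubeadm can regenerate it. A missing file is
// treated the same as a stale one, matching the grep / rm -f sequence above.
func pruneStaleConf(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // config already points at the right endpoint
	}
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	fmt.Println("removed (or absent):", path)
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		_ = pruneStaleConf("/etc/kubernetes/"+f, endpoint)
	}
}
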
	I0414 17:46:06.609494  213635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:46:06.618800  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:46:06.747191  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:46:07.478890  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:46:07.697670  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:46:07.793179  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
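
Note that the restart path replays individual kubeadm phases rather than a full kubeadm init: certs, kubeconfig, kubelet-start, then control-plane and etcd, the last two of which only write static-pod manifests for the already-running kubelet to launch. A sketch of that sequence, assuming kubeadm is on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Replay the kubeadm init phases seen above, in order, against the same
// config file. control-plane and etcd only emit static-pod manifests
// under /etc/kubernetes/manifests; kubelet does the actual starting.
func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("phase failed:", p, err)
			return
		}
	}
}
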
	I0414 17:46:07.893891  213635 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:46:07.893971  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:08.394410  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:08.895002  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:09.395022  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:09.895018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:10.394996  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:10.894824  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:11.394638  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:11.894428  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:12.394452  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:12.894017  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:13.394405  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:13.894519  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:14.394847  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:14.894997  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:15.394630  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:15.895007  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:16.394831  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:16.894632  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:17.395016  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:17.894993  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:18.394976  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:18.895068  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:19.394434  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:19.894886  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:20.395037  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:20.895061  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:21.394429  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:21.894500  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:22.394822  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:22.895080  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:23.394953  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:23.894339  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:24.395018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:24.895037  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:25.394854  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:25.894984  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:26.395005  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:26.895007  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:27.395035  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:27.895034  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:28.394580  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:28.895018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:29.394479  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:29.894485  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:30.394483  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:30.894471  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:31.395020  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:31.895014  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:32.395034  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:32.895028  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:33.394018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:33.894501  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:34.394226  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:34.894064  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:35.394952  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:35.895016  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:36.394607  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:36.895006  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:37.394673  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:37.894995  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:38.394272  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:38.894875  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:39.394148  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:39.895036  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:40.394685  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:40.895010  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:41.394981  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:41.894634  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:42.394270  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:42.895029  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:43.394362  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:43.894756  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:44.395057  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:44.895022  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:45.394470  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:45.894701  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:46.395033  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:46.895033  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:47.394321  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:47.895018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:48.394554  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:48.894703  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:49.394432  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:49.894498  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:50.395063  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:50.894449  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:51.395000  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:51.895026  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:52.394891  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:52.894471  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:53.394778  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:53.894664  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:54.394089  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:54.894622  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:55.394495  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:55.894999  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:56.395001  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:56.894095  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:57.394283  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:57.894977  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:58.394681  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:58.895019  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:59.394738  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:59.894984  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:00.394802  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:00.894854  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:01.395049  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:01.895019  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:02.394977  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:02.894501  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:03.394365  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:03.895039  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:04.395027  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:04.894987  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:05.394716  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:05.894080  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:06.394955  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:06.894670  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:07.394902  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
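
The minute of pgrep lines above is a plain poll loop: check for a kube-apiserver process roughly every 500ms until it appears or the wait budget runs out. A self-contained Go sketch of the same loop; waitForProcess is a hypothetical helper, and minikube's real retry and timeout bookkeeping differ.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` on a fixed interval until the
// process appears or the deadline passes, like the apiserver wait above.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // pgrep exits 0 once a match exists
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("no process matching %q within %s", pattern, timeout)
}

func main() {
	err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, time.Minute)
	fmt.Println(err)
}
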
	I0414 17:47:07.894929  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:07.895008  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:07.936773  213635 cri.go:89] found id: ""
	I0414 17:47:07.936809  213635 logs.go:282] 0 containers: []
	W0414 17:47:07.936822  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:07.936830  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:07.936908  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:07.971073  213635 cri.go:89] found id: ""
	I0414 17:47:07.971104  213635 logs.go:282] 0 containers: []
	W0414 17:47:07.971113  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:07.971118  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:07.971171  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:08.010389  213635 cri.go:89] found id: ""
	I0414 17:47:08.010414  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.010422  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:08.010427  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:08.010482  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:08.044286  213635 cri.go:89] found id: ""
	I0414 17:47:08.044322  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.044334  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:08.044344  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:08.044413  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:08.079985  213635 cri.go:89] found id: ""
	I0414 17:47:08.080008  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.080016  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:08.080021  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:08.080071  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:08.119431  213635 cri.go:89] found id: ""
	I0414 17:47:08.119456  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.119468  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:08.119474  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:08.119529  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:08.152203  213635 cri.go:89] found id: ""
	I0414 17:47:08.152227  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.152234  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:08.152239  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:08.152287  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:08.187035  213635 cri.go:89] found id: ""
	I0414 17:47:08.187064  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.187075  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:08.187092  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:08.187106  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:08.312274  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
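
The kubeconfig handed to kubectl here points at localhost:8443 (visible in the error text), so "connection refused" only confirms what the empty pgrep results already showed: nothing is listening on the apiserver port yet. A raw TCP dial demonstrates the same condition without kubectl:

package main

import (
	"fmt"
	"net"
	"time"
)

// "connection refused" from kubectl just means nothing is listening on
// 8443 yet; a bare TCP dial against the same endpoint shows the same thing.
func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not up:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8443")
}
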
	I0414 17:47:08.312301  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:08.312315  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:08.382714  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:08.382745  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:08.421561  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:08.421588  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:08.476855  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:08.476891  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
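
The container-status command above uses a double fallback: resolve crictl if present, and fall back to docker ps -a when crictl is missing or fails; -a matters because exited containers are exactly what a post-mortem needs. A Go rendering of that fallback, with containerStatus as an illustrative helper:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the shell fallback above: prefer crictl, and
// only if it is missing or errors out, try docker. Both listings include
// exited containers (-a).
func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return out, nil
		}
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both runtimes unavailable:", err)
		return
	}
	fmt.Print(string(out))
}
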
	I0414 17:47:10.991104  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:11.004501  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:11.004575  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:11.039060  213635 cri.go:89] found id: ""
	I0414 17:47:11.039086  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.039094  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:11.039099  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:11.039145  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:11.073857  213635 cri.go:89] found id: ""
	I0414 17:47:11.073883  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.073890  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:11.073896  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:11.073942  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:11.106411  213635 cri.go:89] found id: ""
	I0414 17:47:11.106436  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.106493  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:11.106505  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:11.106550  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:11.145377  213635 cri.go:89] found id: ""
	I0414 17:47:11.145406  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.145416  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:11.145423  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:11.145481  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:11.178621  213635 cri.go:89] found id: ""
	I0414 17:47:11.178650  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.178661  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:11.178668  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:11.178731  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:11.212798  213635 cri.go:89] found id: ""
	I0414 17:47:11.212832  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.212840  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:11.212846  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:11.212902  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:11.258553  213635 cri.go:89] found id: ""
	I0414 17:47:11.258576  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.258584  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:11.258589  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:11.258637  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:11.318616  213635 cri.go:89] found id: ""
	I0414 17:47:11.318658  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.318669  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:11.318680  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:11.318695  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:11.381468  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:11.381500  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:11.395975  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:11.395999  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:11.468932  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:11.468954  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:11.468971  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:11.547431  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:11.547464  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:14.089096  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:14.105644  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:14.105710  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:14.139763  213635 cri.go:89] found id: ""
	I0414 17:47:14.139791  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.139798  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:14.139804  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:14.139866  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:14.174571  213635 cri.go:89] found id: ""
	I0414 17:47:14.174594  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.174600  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:14.174605  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:14.174659  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:14.208140  213635 cri.go:89] found id: ""
	I0414 17:47:14.208164  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.208171  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:14.208177  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:14.208233  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:14.240906  213635 cri.go:89] found id: ""
	I0414 17:47:14.240940  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.240952  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:14.240959  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:14.241023  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:14.273549  213635 cri.go:89] found id: ""
	I0414 17:47:14.273581  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.273593  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:14.273599  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:14.273652  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:14.308758  213635 cri.go:89] found id: ""
	I0414 17:47:14.308791  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.308798  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:14.308805  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:14.308868  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:14.343464  213635 cri.go:89] found id: ""
	I0414 17:47:14.343492  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.343503  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:14.343510  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:14.343571  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:14.377456  213635 cri.go:89] found id: ""
	I0414 17:47:14.377483  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.377493  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:14.377503  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:14.377517  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:14.428031  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:14.428059  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:14.441682  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:14.441706  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:14.511433  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:14.511456  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:14.511470  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:14.591334  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:14.591373  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:17.131067  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:17.150199  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:17.150257  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:17.195868  213635 cri.go:89] found id: ""
	I0414 17:47:17.195895  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.195902  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:17.195909  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:17.195968  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:17.248530  213635 cri.go:89] found id: ""
	I0414 17:47:17.248562  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.248573  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:17.248600  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:17.248664  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:17.302561  213635 cri.go:89] found id: ""
	I0414 17:47:17.302592  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.302603  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:17.302611  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:17.302676  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:17.337154  213635 cri.go:89] found id: ""
	I0414 17:47:17.337185  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.337196  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:17.337204  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:17.337262  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:17.372117  213635 cri.go:89] found id: ""
	I0414 17:47:17.372142  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.372149  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:17.372154  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:17.372209  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:17.409162  213635 cri.go:89] found id: ""
	I0414 17:47:17.409190  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.409199  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:17.409204  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:17.409253  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:17.444609  213635 cri.go:89] found id: ""
	I0414 17:47:17.444636  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.444652  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:17.444660  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:17.444721  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:17.484188  213635 cri.go:89] found id: ""
	I0414 17:47:17.484216  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.484226  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:17.484238  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:17.484252  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:17.523203  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:17.523249  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:17.573785  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:17.573818  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:17.586989  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:17.587014  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:17.659369  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:17.659392  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:17.659408  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:20.241973  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:20.255211  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:20.255288  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:20.292821  213635 cri.go:89] found id: ""
	I0414 17:47:20.292854  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.292866  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:20.292873  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:20.292933  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:20.331101  213635 cri.go:89] found id: ""
	I0414 17:47:20.331150  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.331162  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:20.331169  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:20.331247  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:20.369990  213635 cri.go:89] found id: ""
	I0414 17:47:20.370015  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.370022  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:20.370027  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:20.370096  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:20.406805  213635 cri.go:89] found id: ""
	I0414 17:47:20.406836  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.406846  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:20.406852  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:20.406913  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:20.442314  213635 cri.go:89] found id: ""
	I0414 17:47:20.442340  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.442348  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:20.442353  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:20.442413  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:20.476588  213635 cri.go:89] found id: ""
	I0414 17:47:20.476617  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.476627  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:20.476634  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:20.476695  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:20.510731  213635 cri.go:89] found id: ""
	I0414 17:47:20.510782  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.510821  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:20.510833  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:20.510906  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:20.545219  213635 cri.go:89] found id: ""
	I0414 17:47:20.545244  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.545255  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:20.545277  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:20.545292  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:20.583147  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:20.583180  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:20.636347  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:20.636382  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:20.650452  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:20.650477  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:20.722784  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:20.722811  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:20.722828  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:23.298966  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:23.312159  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:23.312251  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:23.353883  213635 cri.go:89] found id: ""
	I0414 17:47:23.353907  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.353915  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:23.353921  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:23.354005  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:23.391644  213635 cri.go:89] found id: ""
	I0414 17:47:23.391671  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.391680  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:23.391688  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:23.391732  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:23.427612  213635 cri.go:89] found id: ""
	I0414 17:47:23.427644  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.427652  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:23.427658  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:23.427719  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:23.463296  213635 cri.go:89] found id: ""
	I0414 17:47:23.463324  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.463335  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:23.463344  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:23.463408  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:23.497377  213635 cri.go:89] found id: ""
	I0414 17:47:23.497407  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.497418  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:23.497426  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:23.497487  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:23.534162  213635 cri.go:89] found id: ""
	I0414 17:47:23.534209  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.534222  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:23.534229  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:23.534299  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:23.574494  213635 cri.go:89] found id: ""
	I0414 17:47:23.574524  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.574535  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:23.574542  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:23.574611  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:23.612210  213635 cri.go:89] found id: ""
	I0414 17:47:23.612265  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.612279  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:23.612289  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:23.612304  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:23.689765  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:23.689802  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:23.725675  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:23.725709  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:23.778002  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:23.778031  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:23.793019  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:23.793052  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:23.866451  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
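	The cycle above repeats for every control-plane component: minikube runs `sudo crictl ps -a --quiet --name=<component>` over SSH and treats empty output as "No container was found matching". A minimal standalone sketch of that check (hypothetical `containerIDs` helper, not minikube's actual cri.go; assumes crictl is on the local PATH rather than reached over SSH):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (any state) whose name matches the
// given filter, mirroring `sudo crictl ps -a --quiet --name=<name>`.
// An empty result corresponds to the `No container was found matching`
// warnings in the log above.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("probe %s: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
		} else {
			fmt.Printf("%s: %d container(s)\n", c, len(ids))
		}
	}
}
```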
	I0414 17:47:26.367039  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:26.381917  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:26.381987  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:26.416638  213635 cri.go:89] found id: ""
	I0414 17:47:26.416661  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.416668  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:26.416674  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:26.416721  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:26.458324  213635 cri.go:89] found id: ""
	I0414 17:47:26.458349  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.458360  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:26.458367  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:26.458423  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:26.493044  213635 cri.go:89] found id: ""
	I0414 17:47:26.493096  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.493109  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:26.493116  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:26.493181  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:26.527654  213635 cri.go:89] found id: ""
	I0414 17:47:26.527690  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.527702  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:26.527709  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:26.527769  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:26.565607  213635 cri.go:89] found id: ""
	I0414 17:47:26.565633  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.565639  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:26.565645  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:26.565692  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:26.598157  213635 cri.go:89] found id: ""
	I0414 17:47:26.598186  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.598196  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:26.598204  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:26.598264  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:26.631534  213635 cri.go:89] found id: ""
	I0414 17:47:26.631572  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.631581  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:26.631586  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:26.631652  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:26.669109  213635 cri.go:89] found id: ""
	I0414 17:47:26.669134  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.669145  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:26.669155  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:26.669169  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:26.722048  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:26.722075  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:26.735141  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:26.735160  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:26.808950  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:26.808979  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:26.808996  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:26.896662  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:26.896693  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
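	The timestamps (17:47:23, :26, :29, ...) show the outer loop: minikube polls for a running apiserver with `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every three seconds, and each failed attempt triggers another full diagnostic dump. A hedged sketch of such a poll loop (illustrative only, not minikube's actual wait logic; `waitForAPIServer` is a hypothetical name):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a kube-apiserver process the way the log
// above does: pgrep exits 0 when at least one process matches the
// pattern, non-zero otherwise.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // a matching process exists
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence in the timestamps
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
```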
	I0414 17:47:29.440079  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:29.454761  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:29.454837  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:29.488451  213635 cri.go:89] found id: ""
	I0414 17:47:29.488480  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.488491  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:29.488499  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:29.488548  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:29.520861  213635 cri.go:89] found id: ""
	I0414 17:47:29.520891  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.520902  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:29.520908  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:29.520963  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:29.557913  213635 cri.go:89] found id: ""
	I0414 17:47:29.557939  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.557949  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:29.557956  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:29.558013  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:29.596839  213635 cri.go:89] found id: ""
	I0414 17:47:29.596878  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.596889  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:29.596896  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:29.596959  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:29.631746  213635 cri.go:89] found id: ""
	I0414 17:47:29.631779  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.631789  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:29.631797  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:29.631864  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:29.667006  213635 cri.go:89] found id: ""
	I0414 17:47:29.667034  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.667048  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:29.667055  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:29.667111  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:29.700458  213635 cri.go:89] found id: ""
	I0414 17:47:29.700490  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.700500  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:29.700507  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:29.700569  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:29.736776  213635 cri.go:89] found id: ""
	I0414 17:47:29.736804  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.736814  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:29.736825  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:29.736840  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:29.776831  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:29.776871  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:29.830601  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:29.830632  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:29.844366  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:29.844396  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:29.920571  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:29.920595  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:29.920611  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:32.502415  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:32.516740  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:32.516806  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:32.551360  213635 cri.go:89] found id: ""
	I0414 17:47:32.551380  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.551387  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:32.551393  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:32.551440  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:32.588757  213635 cri.go:89] found id: ""
	I0414 17:47:32.588785  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.588795  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:32.588802  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:32.588869  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:32.622369  213635 cri.go:89] found id: ""
	I0414 17:47:32.622394  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.622405  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:32.622413  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:32.622473  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:32.658310  213635 cri.go:89] found id: ""
	I0414 17:47:32.658334  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.658343  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:32.658350  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:32.658408  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:32.692724  213635 cri.go:89] found id: ""
	I0414 17:47:32.692756  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.692768  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:32.692776  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:32.692836  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:32.729086  213635 cri.go:89] found id: ""
	I0414 17:47:32.729113  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.729121  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:32.729127  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:32.729182  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:32.761853  213635 cri.go:89] found id: ""
	I0414 17:47:32.761878  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.761886  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:32.761891  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:32.761937  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:32.794906  213635 cri.go:89] found id: ""
	I0414 17:47:32.794931  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.794938  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:32.794945  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:32.794956  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:32.876985  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:32.877027  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:32.915184  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:32.915210  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:32.965418  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:32.965449  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:32.978245  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:32.978270  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:33.046592  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:35.547721  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:35.562729  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:35.562794  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:35.600323  213635 cri.go:89] found id: ""
	I0414 17:47:35.600353  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.600365  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:35.600374  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:35.600426  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:35.639091  213635 cri.go:89] found id: ""
	I0414 17:47:35.639116  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.639124  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:35.639130  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:35.639185  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:35.674709  213635 cri.go:89] found id: ""
	I0414 17:47:35.674743  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.674755  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:35.674763  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:35.674825  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:35.712316  213635 cri.go:89] found id: ""
	I0414 17:47:35.712340  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.712347  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:35.712353  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:35.712399  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:35.746497  213635 cri.go:89] found id: ""
	I0414 17:47:35.746525  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.746535  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:35.746542  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:35.746611  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:35.787414  213635 cri.go:89] found id: ""
	I0414 17:47:35.787436  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.787445  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:35.787460  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:35.787514  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:35.818830  213635 cri.go:89] found id: ""
	I0414 17:47:35.818857  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.818867  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:35.818874  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:35.818938  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:35.854020  213635 cri.go:89] found id: ""
	I0414 17:47:35.854048  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.854059  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:35.854082  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:35.854095  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:35.907502  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:35.907530  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:35.922223  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:35.922248  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:35.992058  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:35.992085  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:35.992101  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:36.070377  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:36.070413  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
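	The "container status" step just above uses a shell fallback chain: `which crictl || echo crictl` resolves crictl's full path when installed (falling back to a bare PATH lookup otherwise), and `|| sudo docker ps -a` switches to Docker if crictl cannot list containers at all. The same fallback, sketched directly in Go (illustrative only; assumes both tools may or may not be installed):

```go
package main

import (
	"fmt"
	"os/exec"
)

// listContainers tries crictl first and falls back to docker, mirroring
// the `sudo crictl ps -a || sudo docker ps -a` fallback in the log above.
func listContainers() (string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err == nil {
		return string(out), nil
	}
	out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("both crictl and docker failed: %w", err)
	}
	return string(out), nil
}

func main() {
	out, err := listContainers()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(out)
}
```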
	I0414 17:47:38.612483  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:38.625570  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:38.625639  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:38.664060  213635 cri.go:89] found id: ""
	I0414 17:47:38.664084  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.664104  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:38.664112  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:38.664168  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:38.698505  213635 cri.go:89] found id: ""
	I0414 17:47:38.698535  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.698546  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:38.698553  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:38.698614  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:38.735113  213635 cri.go:89] found id: ""
	I0414 17:47:38.735142  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.735153  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:38.735161  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:38.735229  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:38.773173  213635 cri.go:89] found id: ""
	I0414 17:47:38.773203  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.773211  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:38.773216  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:38.773270  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:38.807136  213635 cri.go:89] found id: ""
	I0414 17:47:38.807167  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.807178  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:38.807186  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:38.807244  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:38.844350  213635 cri.go:89] found id: ""
	I0414 17:47:38.844375  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.844384  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:38.844392  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:38.844445  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:38.879565  213635 cri.go:89] found id: ""
	I0414 17:47:38.879587  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.879594  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:38.879599  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:38.879658  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:38.916412  213635 cri.go:89] found id: ""
	I0414 17:47:38.916449  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.916457  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:38.916465  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:38.916475  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:38.953944  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:38.953972  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:39.004989  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:39.005019  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:39.018618  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:39.018640  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:39.091095  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:39.091122  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:39.091136  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:41.675012  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:41.689023  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:41.689085  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:41.722675  213635 cri.go:89] found id: ""
	I0414 17:47:41.722698  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.722707  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:41.722715  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:41.722774  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:41.757787  213635 cri.go:89] found id: ""
	I0414 17:47:41.757808  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.757815  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:41.757822  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:41.757895  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:41.792938  213635 cri.go:89] found id: ""
	I0414 17:47:41.792970  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.792981  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:41.792990  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:41.793060  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:41.826121  213635 cri.go:89] found id: ""
	I0414 17:47:41.826145  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.826153  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:41.826158  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:41.826206  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:41.862687  213635 cri.go:89] found id: ""
	I0414 17:47:41.862717  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.862728  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:41.862735  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:41.862810  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:41.901905  213635 cri.go:89] found id: ""
	I0414 17:47:41.901935  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.901945  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:41.901953  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:41.902010  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:41.936560  213635 cri.go:89] found id: ""
	I0414 17:47:41.936591  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.936602  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:41.936609  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:41.936673  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:41.968609  213635 cri.go:89] found id: ""
	I0414 17:47:41.968640  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.968651  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:41.968663  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:41.968677  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:42.037691  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:42.037725  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:42.037742  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:42.123173  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:42.123222  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:42.164982  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:42.165018  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:42.217567  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:42.217601  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:44.733645  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:44.748083  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:44.748144  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:44.782103  213635 cri.go:89] found id: ""
	I0414 17:47:44.782131  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.782141  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:44.782148  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:44.782200  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:44.825594  213635 cri.go:89] found id: ""
	I0414 17:47:44.825640  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.825652  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:44.825659  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:44.825719  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:44.858967  213635 cri.go:89] found id: ""
	I0414 17:47:44.859000  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.859017  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:44.859024  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:44.859088  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:44.892965  213635 cri.go:89] found id: ""
	I0414 17:47:44.892990  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.892999  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:44.893007  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:44.893073  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:44.926983  213635 cri.go:89] found id: ""
	I0414 17:47:44.927007  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.927014  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:44.927019  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:44.927066  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:44.961406  213635 cri.go:89] found id: ""
	I0414 17:47:44.961459  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.961471  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:44.961478  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:44.961540  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:44.996262  213635 cri.go:89] found id: ""
	I0414 17:47:44.996287  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.996296  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:44.996304  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:44.996368  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:45.029476  213635 cri.go:89] found id: ""
	I0414 17:47:45.029507  213635 logs.go:282] 0 containers: []
	W0414 17:47:45.029518  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:45.029529  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:45.029543  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:45.100081  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:45.100110  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:45.100122  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:45.179286  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:45.179319  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:45.220129  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:45.220166  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:45.275257  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:45.275292  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
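	Every "describe nodes" attempt above fails the same way because the kubeconfig targets localhost:8443 and, with no kube-apiserver container running, nothing is listening there. A quick way to confirm that class of failure independently of kubectl is a plain TCP dial (hedged sketch; the address is taken from the "connection to the server localhost:8443 was refused" errors above):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// If nothing accepts the connection on the apiserver port, kubectl
	// can only report "connection refused", as seen repeatedly above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
```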
	I0414 17:47:47.792170  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:47.805709  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:47.805769  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:47.842023  213635 cri.go:89] found id: ""
	I0414 17:47:47.842050  213635 logs.go:282] 0 containers: []
	W0414 17:47:47.842058  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:47.842063  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:47.842118  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:47.884228  213635 cri.go:89] found id: ""
	I0414 17:47:47.884260  213635 logs.go:282] 0 containers: []
	W0414 17:47:47.884271  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:47.884278  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:47.884338  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:47.924093  213635 cri.go:89] found id: ""
	I0414 17:47:47.924121  213635 logs.go:282] 0 containers: []
	W0414 17:47:47.924130  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:47.924137  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:47.924193  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:47.965378  213635 cri.go:89] found id: ""
	I0414 17:47:47.965406  213635 logs.go:282] 0 containers: []
	W0414 17:47:47.965416  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:47.965423  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:47.965538  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:48.003136  213635 cri.go:89] found id: ""
	I0414 17:47:48.003165  213635 logs.go:282] 0 containers: []
	W0414 17:47:48.003178  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:48.003187  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:48.003253  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:48.042729  213635 cri.go:89] found id: ""
	I0414 17:47:48.042758  213635 logs.go:282] 0 containers: []
	W0414 17:47:48.042768  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:48.042774  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:48.042837  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:48.077654  213635 cri.go:89] found id: ""
	I0414 17:47:48.077682  213635 logs.go:282] 0 containers: []
	W0414 17:47:48.077692  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:48.077699  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:48.077749  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:48.109967  213635 cri.go:89] found id: ""
	I0414 17:47:48.109991  213635 logs.go:282] 0 containers: []
	W0414 17:47:48.109998  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:48.110006  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:48.110017  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:48.125245  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:48.125277  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:48.194705  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:48.194725  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:48.194738  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:48.287160  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:48.287196  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:48.335515  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:48.335547  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:50.892108  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:50.905172  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:50.905234  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:50.940079  213635 cri.go:89] found id: ""
	I0414 17:47:50.940104  213635 logs.go:282] 0 containers: []
	W0414 17:47:50.940111  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:50.940116  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:50.940176  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:50.973887  213635 cri.go:89] found id: ""
	I0414 17:47:50.973912  213635 logs.go:282] 0 containers: []
	W0414 17:47:50.973919  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:50.973926  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:50.973982  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:51.012547  213635 cri.go:89] found id: ""
	I0414 17:47:51.012568  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.012577  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:51.012584  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:51.012640  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:51.053157  213635 cri.go:89] found id: ""
	I0414 17:47:51.053180  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.053188  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:51.053196  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:51.053249  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:51.110289  213635 cri.go:89] found id: ""
	I0414 17:47:51.110319  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.110330  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:51.110337  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:51.110393  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:51.144361  213635 cri.go:89] found id: ""
	I0414 17:47:51.144383  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.144394  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:51.144402  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:51.144530  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:51.177527  213635 cri.go:89] found id: ""
	I0414 17:47:51.177563  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.177571  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:51.177576  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:51.177636  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:51.210869  213635 cri.go:89] found id: ""
	I0414 17:47:51.210891  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.210899  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:51.210907  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:51.210918  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:51.247291  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:51.247317  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:51.299677  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:51.299706  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:51.313384  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:51.313409  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:51.388212  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:51.388239  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:51.388254  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:53.976114  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:53.989051  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:53.989115  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:54.023756  213635 cri.go:89] found id: ""
	I0414 17:47:54.023788  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.023799  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:54.023805  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:54.023869  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:54.061807  213635 cri.go:89] found id: ""
	I0414 17:47:54.061853  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.061865  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:54.061872  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:54.061930  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:54.095835  213635 cri.go:89] found id: ""
	I0414 17:47:54.095878  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.095890  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:54.095897  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:54.096006  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:54.131513  213635 cri.go:89] found id: ""
	I0414 17:47:54.131535  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.131543  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:54.131548  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:54.131594  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:54.171002  213635 cri.go:89] found id: ""
	I0414 17:47:54.171024  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.171031  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:54.171037  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:54.171095  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:54.206779  213635 cri.go:89] found id: ""
	I0414 17:47:54.206801  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.206808  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:54.206818  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:54.206876  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:54.252485  213635 cri.go:89] found id: ""
	I0414 17:47:54.252533  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.252547  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:54.252555  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:54.252628  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:54.290628  213635 cri.go:89] found id: ""
	I0414 17:47:54.290656  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.290667  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:54.290676  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:54.290689  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:54.364000  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:54.364020  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:54.364032  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:54.446117  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:54.446152  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:54.488749  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:54.488775  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:54.540890  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:54.540922  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:57.055546  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:57.069362  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:57.069420  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:57.112914  213635 cri.go:89] found id: ""
	I0414 17:47:57.112942  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.112949  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:57.112955  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:57.113002  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:57.149533  213635 cri.go:89] found id: ""
	I0414 17:47:57.149553  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.149560  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:57.149565  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:57.149622  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:57.184595  213635 cri.go:89] found id: ""
	I0414 17:47:57.184624  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.184632  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:57.184637  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:57.184683  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:57.219904  213635 cri.go:89] found id: ""
	I0414 17:47:57.219931  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.219942  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:57.219949  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:57.220008  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:57.255709  213635 cri.go:89] found id: ""
	I0414 17:47:57.255736  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.255745  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:57.255750  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:57.255809  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:57.289390  213635 cri.go:89] found id: ""
	I0414 17:47:57.289413  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.289419  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:57.289425  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:57.289474  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:57.329950  213635 cri.go:89] found id: ""
	I0414 17:47:57.329972  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.329978  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:57.329983  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:57.330028  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:57.365856  213635 cri.go:89] found id: ""
	I0414 17:47:57.365888  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.365901  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:57.365911  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:57.365925  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:57.378637  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:57.378661  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:57.446639  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
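
The "connection to the server localhost:8443 was refused" from kubectl is the same root symptom: no kube-apiserver container exists, so nothing is listening on the apiserver port inside the VM. A hypothetical probe of just that assumption (address and timeout are illustrative, not from minikube):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// kubectl above targets localhost:8443; if the TCP dial is refused,
		// every kubectl call fails the same way regardless of kubeconfig.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}
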
	I0414 17:47:57.446662  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:57.446676  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:57.536049  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:57.536086  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:57.585473  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:57.585506  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:00.135711  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:00.151060  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:00.151131  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:00.184972  213635 cri.go:89] found id: ""
	I0414 17:48:00.185005  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.185016  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:00.185023  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:00.185088  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:00.218051  213635 cri.go:89] found id: ""
	I0414 17:48:00.218085  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.218093  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:00.218099  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:00.218156  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:00.251291  213635 cri.go:89] found id: ""
	I0414 17:48:00.251318  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.251325  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:00.251331  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:00.251392  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:00.291683  213635 cri.go:89] found id: ""
	I0414 17:48:00.291706  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.291713  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:00.291718  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:00.291765  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:00.329316  213635 cri.go:89] found id: ""
	I0414 17:48:00.329342  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.329350  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:00.329356  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:00.329409  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:00.364819  213635 cri.go:89] found id: ""
	I0414 17:48:00.364848  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.364856  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:00.364861  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:00.364905  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:00.404928  213635 cri.go:89] found id: ""
	I0414 17:48:00.404961  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.404971  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:00.404978  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:00.405040  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:00.439708  213635 cri.go:89] found id: ""
	I0414 17:48:00.439739  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.439750  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:00.439761  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:00.439776  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:00.479252  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:00.479285  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:00.533545  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:00.533576  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:00.546920  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:00.546952  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:00.614440  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:00.614461  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:00.614476  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:03.197930  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:03.212912  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:03.212973  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:03.272435  213635 cri.go:89] found id: ""
	I0414 17:48:03.272467  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.272479  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:03.272487  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:03.272554  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:03.336351  213635 cri.go:89] found id: ""
	I0414 17:48:03.336373  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.336380  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:03.336386  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:03.336430  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:03.370368  213635 cri.go:89] found id: ""
	I0414 17:48:03.370398  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.370408  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:03.370422  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:03.370475  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:03.408402  213635 cri.go:89] found id: ""
	I0414 17:48:03.408429  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.408436  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:03.408442  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:03.408491  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:03.442912  213635 cri.go:89] found id: ""
	I0414 17:48:03.442939  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.442950  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:03.442957  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:03.443019  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:03.479439  213635 cri.go:89] found id: ""
	I0414 17:48:03.479467  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.479476  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:03.479481  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:03.479544  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:03.517971  213635 cri.go:89] found id: ""
	I0414 17:48:03.517993  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.518000  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:03.518005  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:03.518059  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:03.556177  213635 cri.go:89] found id: ""
	I0414 17:48:03.556208  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.556216  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:03.556224  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:03.556237  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:03.594142  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:03.594167  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:03.644688  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:03.644718  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:03.658140  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:03.658164  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:03.729627  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:03.729649  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:03.729663  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
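
The timestamps show one full probe-and-gather cycle roughly every three seconds (17:48:00, 17:48:03, 17:48:06, ...). A hypothetical sketch of that outer wait loop, built around the same pgrep probe the log shows (the deadline and interval here are illustrative; minikube's real loop is not reproduced):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Same probe as the log: sudo pgrep -xnf kube-apiserver.*minikube.*
			// pgrep exits non-zero when no process matches, so err == nil means "found".
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}
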
	I0414 17:48:06.309939  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:06.323927  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:06.323990  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:06.364388  213635 cri.go:89] found id: ""
	I0414 17:48:06.364412  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.364426  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:06.364431  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:06.364477  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:06.398800  213635 cri.go:89] found id: ""
	I0414 17:48:06.398821  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.398828  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:06.398833  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:06.398885  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:06.442842  213635 cri.go:89] found id: ""
	I0414 17:48:06.442873  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.442884  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:06.442891  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:06.442973  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:06.485910  213635 cri.go:89] found id: ""
	I0414 17:48:06.485945  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.485955  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:06.485962  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:06.486023  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:06.520624  213635 cri.go:89] found id: ""
	I0414 17:48:06.520656  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.520668  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:06.520675  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:06.520741  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:06.555790  213635 cri.go:89] found id: ""
	I0414 17:48:06.555833  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.555845  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:06.555853  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:06.555916  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:06.589144  213635 cri.go:89] found id: ""
	I0414 17:48:06.589166  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.589173  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:06.589177  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:06.589223  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:06.623771  213635 cri.go:89] found id: ""
	I0414 17:48:06.623808  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.623824  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:06.623833  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:06.623843  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:06.679003  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:06.679039  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:06.695303  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:06.695328  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:06.770562  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:06.770585  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:06.770597  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:06.850617  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:06.850652  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
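
The container-status command above is a two-level fallback in a single shell line: prefer crictl (resolving its path with which, falling back to the bare name), and if that whole command fails, try docker ps instead. Because the fallback logic is shell syntax, ssh_runner hands the entire line to bash -c; a hypothetical local equivalent:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The backticks and || are interpreted by bash, not by Go.
		cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("err=%v\n%s", err, out)
	}
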
	I0414 17:48:09.390500  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:09.403827  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:09.403885  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:09.438395  213635 cri.go:89] found id: ""
	I0414 17:48:09.438420  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.438428  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:09.438434  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:09.438484  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:09.473071  213635 cri.go:89] found id: ""
	I0414 17:48:09.473098  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.473106  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:09.473112  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:09.473159  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:09.506175  213635 cri.go:89] found id: ""
	I0414 17:48:09.506205  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.506216  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:09.506223  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:09.506272  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:09.540488  213635 cri.go:89] found id: ""
	I0414 17:48:09.540511  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.540518  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:09.540523  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:09.540583  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:09.576189  213635 cri.go:89] found id: ""
	I0414 17:48:09.576222  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.576233  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:09.576241  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:09.576302  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:09.607908  213635 cri.go:89] found id: ""
	I0414 17:48:09.607937  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.607945  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:09.607950  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:09.608000  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:09.642069  213635 cri.go:89] found id: ""
	I0414 17:48:09.642098  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.642108  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:09.642115  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:09.642177  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:09.675434  213635 cri.go:89] found id: ""
	I0414 17:48:09.675463  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.675474  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:09.675484  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:09.675496  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:09.754118  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:09.754154  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:09.797336  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:09.797373  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:09.849366  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:09.849407  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
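
The dmesg invocation above restricts kernel messages to warning severity and worse: -P disables the pager, -H formats timestamps for humans, -L=never strips color codes, --level selects the severities, and tail -n 400 bounds the output. A hypothetical local run of the same line (the pipe requires a shell):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Identical command to the log line above.
		cmd := "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("err=%v\n%s", err, out)
	}
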
	I0414 17:48:09.863427  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:09.863458  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:09.934735  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:12.435482  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:12.449310  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:12.449374  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:12.484115  213635 cri.go:89] found id: ""
	I0414 17:48:12.484143  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.484153  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:12.484160  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:12.484213  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:12.521972  213635 cri.go:89] found id: ""
	I0414 17:48:12.521994  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.522001  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:12.522012  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:12.522071  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:12.554192  213635 cri.go:89] found id: ""
	I0414 17:48:12.554219  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.554229  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:12.554237  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:12.554296  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:12.587420  213635 cri.go:89] found id: ""
	I0414 17:48:12.587450  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.587460  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:12.587467  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:12.587526  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:12.621562  213635 cri.go:89] found id: ""
	I0414 17:48:12.621588  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.621599  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:12.621608  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:12.621672  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:12.660123  213635 cri.go:89] found id: ""
	I0414 17:48:12.660147  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.660155  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:12.660160  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:12.660216  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:12.693979  213635 cri.go:89] found id: ""
	I0414 17:48:12.694010  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.694021  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:12.694029  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:12.694097  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:12.728017  213635 cri.go:89] found id: ""
	I0414 17:48:12.728043  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.728051  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:12.728060  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:12.728072  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:12.782896  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:12.782927  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:12.795655  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:12.795679  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:12.865150  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:12.865183  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:12.865197  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:12.950645  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:12.950682  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:15.490793  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:15.504867  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:15.504941  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:15.538968  213635 cri.go:89] found id: ""
	I0414 17:48:15.538990  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.538998  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:15.539003  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:15.539049  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:15.573937  213635 cri.go:89] found id: ""
	I0414 17:48:15.573961  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.573968  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:15.573973  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:15.574019  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:15.609320  213635 cri.go:89] found id: ""
	I0414 17:48:15.609346  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.609360  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:15.609367  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:15.609425  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:15.641598  213635 cri.go:89] found id: ""
	I0414 17:48:15.641626  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.641635  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:15.641641  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:15.641695  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:15.675213  213635 cri.go:89] found id: ""
	I0414 17:48:15.675239  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.675248  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:15.675255  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:15.675313  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:15.710542  213635 cri.go:89] found id: ""
	I0414 17:48:15.710565  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.710572  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:15.710578  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:15.710623  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:15.745699  213635 cri.go:89] found id: ""
	I0414 17:48:15.745724  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.745735  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:15.745742  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:15.745792  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:15.782559  213635 cri.go:89] found id: ""
	I0414 17:48:15.782586  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.782596  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:15.782605  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:15.782619  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:15.837926  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:15.837964  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:15.854293  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:15.854333  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:15.944741  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:15.944761  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:15.944773  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:16.032687  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:16.032716  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:18.574911  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:18.589009  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:18.589060  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:18.625705  213635 cri.go:89] found id: ""
	I0414 17:48:18.625730  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.625738  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:18.625743  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:18.625796  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:18.659670  213635 cri.go:89] found id: ""
	I0414 17:48:18.659704  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.659713  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:18.659719  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:18.659762  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:18.694973  213635 cri.go:89] found id: ""
	I0414 17:48:18.694997  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.695005  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:18.695011  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:18.695083  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:18.733777  213635 cri.go:89] found id: ""
	I0414 17:48:18.733801  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.733808  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:18.733813  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:18.733881  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:18.765747  213635 cri.go:89] found id: ""
	I0414 17:48:18.765768  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.765775  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:18.765780  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:18.765856  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:18.799558  213635 cri.go:89] found id: ""
	I0414 17:48:18.799585  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.799595  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:18.799601  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:18.799653  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:18.835245  213635 cri.go:89] found id: ""
	I0414 17:48:18.835279  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.835291  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:18.835300  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:18.835354  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:18.870176  213635 cri.go:89] found id: ""
	I0414 17:48:18.870201  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.870212  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:18.870222  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:18.870236  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:18.883166  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:18.883195  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:18.946103  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:18.946128  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:18.946145  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:19.023462  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:19.023496  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:19.067254  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:19.067281  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:21.619412  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:21.635163  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:21.635233  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:21.671680  213635 cri.go:89] found id: ""
	I0414 17:48:21.671705  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.671713  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:21.671719  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:21.671767  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:21.709955  213635 cri.go:89] found id: ""
	I0414 17:48:21.709987  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.709998  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:21.710005  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:21.710064  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:21.743179  213635 cri.go:89] found id: ""
	I0414 17:48:21.743202  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.743209  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:21.743214  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:21.743267  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:21.775835  213635 cri.go:89] found id: ""
	I0414 17:48:21.775862  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.775870  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:21.775875  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:21.775920  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:21.810164  213635 cri.go:89] found id: ""
	I0414 17:48:21.810190  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.810201  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:21.810207  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:21.810253  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:21.848616  213635 cri.go:89] found id: ""
	I0414 17:48:21.848639  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.848646  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:21.848651  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:21.848717  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:21.887985  213635 cri.go:89] found id: ""
	I0414 17:48:21.888014  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.888024  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:21.888030  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:21.888076  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:21.927965  213635 cri.go:89] found id: ""
	I0414 17:48:21.927992  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.928003  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:21.928013  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:21.928028  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:21.989253  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:21.989294  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:22.003399  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:22.003429  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:22.071849  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:22.071872  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:22.071889  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:22.149857  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:22.149888  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:24.691531  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:24.706169  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:24.706230  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:24.745747  213635 cri.go:89] found id: ""
	I0414 17:48:24.745780  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.745791  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:24.745799  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:24.745886  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:24.785261  213635 cri.go:89] found id: ""
	I0414 17:48:24.785284  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.785291  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:24.785296  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:24.785351  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:24.824491  213635 cri.go:89] found id: ""
	I0414 17:48:24.824525  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.824536  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:24.824550  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:24.824606  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:24.868655  213635 cri.go:89] found id: ""
	I0414 17:48:24.868683  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.868696  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:24.868704  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:24.868769  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:24.910959  213635 cri.go:89] found id: ""
	I0414 17:48:24.910982  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.910989  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:24.910995  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:24.911053  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:24.944036  213635 cri.go:89] found id: ""
	I0414 17:48:24.944065  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.944073  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:24.944078  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:24.944127  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:24.977481  213635 cri.go:89] found id: ""
	I0414 17:48:24.977512  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.977522  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:24.977529  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:24.977589  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:25.010063  213635 cri.go:89] found id: ""
	I0414 17:48:25.010087  213635 logs.go:282] 0 containers: []
	W0414 17:48:25.010094  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:25.010103  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:25.010114  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:25.062645  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:25.062680  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:25.077120  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:25.077144  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:25.151533  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:25.151553  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:25.151565  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:25.230945  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:25.230985  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:27.774758  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:27.789640  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:27.789692  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:27.822128  213635 cri.go:89] found id: ""
	I0414 17:48:27.822162  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.822169  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:27.822175  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:27.822227  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:27.858364  213635 cri.go:89] found id: ""
	I0414 17:48:27.858394  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.858401  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:27.858406  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:27.858452  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:27.893587  213635 cri.go:89] found id: ""
	I0414 17:48:27.893618  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.893628  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:27.893636  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:27.893695  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:27.930766  213635 cri.go:89] found id: ""
	I0414 17:48:27.930799  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.930810  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:27.930817  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:27.930879  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:27.962936  213635 cri.go:89] found id: ""
	I0414 17:48:27.962966  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.962977  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:27.962983  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:27.963036  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:27.999471  213635 cri.go:89] found id: ""
	I0414 17:48:27.999503  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.999511  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:27.999517  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:27.999575  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:28.030604  213635 cri.go:89] found id: ""
	I0414 17:48:28.030636  213635 logs.go:282] 0 containers: []
	W0414 17:48:28.030645  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:28.030650  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:28.030704  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:28.066407  213635 cri.go:89] found id: ""
	I0414 17:48:28.066436  213635 logs.go:282] 0 containers: []
	W0414 17:48:28.066446  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:28.066457  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:28.066471  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:28.118182  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:28.118210  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:28.131007  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:28.131031  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:28.198468  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:28.198488  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:28.198500  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:28.286352  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:28.286387  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:30.826694  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:30.839877  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:30.839949  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:30.873980  213635 cri.go:89] found id: ""
	I0414 17:48:30.874010  213635 logs.go:282] 0 containers: []
	W0414 17:48:30.874021  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:30.874028  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:30.874087  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:30.909567  213635 cri.go:89] found id: ""
	I0414 17:48:30.909593  213635 logs.go:282] 0 containers: []
	W0414 17:48:30.909600  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:30.909606  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:30.909661  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:30.943382  213635 cri.go:89] found id: ""
	I0414 17:48:30.943414  213635 logs.go:282] 0 containers: []
	W0414 17:48:30.943424  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:30.943431  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:30.943487  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:30.976444  213635 cri.go:89] found id: ""
	I0414 17:48:30.976477  213635 logs.go:282] 0 containers: []
	W0414 17:48:30.976488  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:30.976496  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:30.976555  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:31.010623  213635 cri.go:89] found id: ""
	I0414 17:48:31.010651  213635 logs.go:282] 0 containers: []
	W0414 17:48:31.010662  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:31.010669  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:31.010727  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:31.049542  213635 cri.go:89] found id: ""
	I0414 17:48:31.049568  213635 logs.go:282] 0 containers: []
	W0414 17:48:31.049578  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:31.049585  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:31.049646  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:31.082301  213635 cri.go:89] found id: ""
	I0414 17:48:31.082326  213635 logs.go:282] 0 containers: []
	W0414 17:48:31.082336  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:31.082343  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:31.082403  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:31.115742  213635 cri.go:89] found id: ""
	I0414 17:48:31.115768  213635 logs.go:282] 0 containers: []
	W0414 17:48:31.115776  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:31.115784  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:31.115794  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:31.167568  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:31.167598  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:31.180202  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:31.180229  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:31.247958  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:31.247980  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:31.247995  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:31.337341  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:31.337379  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
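Each cycle above shows minikube probing for control-plane containers: a pgrep for a kube-apiserver process, then one crictl query per component; every query returns an empty ID list, hence the repeated "No container was found" warnings. A minimal standalone sketch of the same check (hypothetical, not minikube's own code; assumes it runs on the minikube node with sudo and crictl on PATH, exactly as the logged commands do):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// components mirrors the container names polled in the log above.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}

	func main() {
		for _, name := range components {
			// The same query the log shows: list all containers (any state)
			// whose name matches the component, printing only their IDs.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				// Corresponds to the W... "No container was found matching" lines above.
				fmt.Printf("no container found matching %q\n", name)
			} else {
				fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
			}
		}
	}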
	I0414 17:48:33.892139  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:33.905803  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:33.905884  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:33.945429  213635 cri.go:89] found id: ""
	I0414 17:48:33.945458  213635 logs.go:282] 0 containers: []
	W0414 17:48:33.945468  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:33.945476  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:33.945524  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:33.978018  213635 cri.go:89] found id: ""
	I0414 17:48:33.978047  213635 logs.go:282] 0 containers: []
	W0414 17:48:33.978056  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:33.978063  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:33.978113  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:34.013902  213635 cri.go:89] found id: ""
	I0414 17:48:34.013926  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.013934  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:34.013940  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:34.013986  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:34.052308  213635 cri.go:89] found id: ""
	I0414 17:48:34.052340  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.052351  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:34.052358  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:34.052423  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:34.092541  213635 cri.go:89] found id: ""
	I0414 17:48:34.092565  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.092572  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:34.092577  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:34.092638  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:34.126690  213635 cri.go:89] found id: ""
	I0414 17:48:34.126725  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.126736  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:34.126745  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:34.126810  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:34.161043  213635 cri.go:89] found id: ""
	I0414 17:48:34.161072  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.161081  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:34.161087  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:34.161148  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:34.195793  213635 cri.go:89] found id: ""
	I0414 17:48:34.195817  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.195825  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:34.195835  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:34.195847  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:34.238858  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:34.238890  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:34.294092  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:34.294122  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:34.310473  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:34.310510  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:34.377489  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:34.377517  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:34.377535  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
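The recurring "describe nodes" failure is consistent with the empty crictl results: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at an apiserver on localhost:8443, and with no kube-apiserver container running the TCP connect is simply refused. A quick way to confirm that independently (a hedged sketch; a raw dial is enough to distinguish "connection refused" from, say, a TLS or auth problem):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The kubectl error in the log ("connection to the server
		// localhost:8443 was refused") is a plain connect failure,
		// so a raw TCP dial reproduces it.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err) // expect "connection refused" here
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}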
	I0414 17:48:36.963220  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:36.976594  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:36.976663  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:37.009685  213635 cri.go:89] found id: ""
	I0414 17:48:37.009710  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.009720  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:37.009727  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:37.009780  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:37.044805  213635 cri.go:89] found id: ""
	I0414 17:48:37.044832  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.044845  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:37.044852  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:37.044915  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:37.096059  213635 cri.go:89] found id: ""
	I0414 17:48:37.096082  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.096089  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:37.096094  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:37.096146  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:37.132630  213635 cri.go:89] found id: ""
	I0414 17:48:37.132654  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.132664  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:37.132670  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:37.132731  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:37.168840  213635 cri.go:89] found id: ""
	I0414 17:48:37.168865  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.168874  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:37.168881  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:37.168940  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:37.202226  213635 cri.go:89] found id: ""
	I0414 17:48:37.202250  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.202258  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:37.202264  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:37.202321  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:37.236649  213635 cri.go:89] found id: ""
	I0414 17:48:37.236677  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.236687  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:37.236695  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:37.236758  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:37.270393  213635 cri.go:89] found id: ""
	I0414 17:48:37.270417  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.270427  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:37.270438  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:37.270454  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:37.320463  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:37.320492  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:37.334355  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:37.334388  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:37.402650  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:37.402674  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:37.402686  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:37.479961  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:37.479999  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	[... eight further near-identical polling cycles (17:48:40 through 17:49:05) elided: each finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or kubernetes-dashboard container, gathers the same kubelet/dmesg/describe-nodes/CRI-O/container-status logs, and fails "describe nodes" with the same localhost:8443 connection refusal ...]
	I0414 17:49:04.787535  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:04.801528  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:04.801604  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:04.838408  213635 cri.go:89] found id: ""
	I0414 17:49:04.838442  213635 logs.go:282] 0 containers: []
	W0414 17:49:04.838458  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:04.838466  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:04.838529  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:04.888614  213635 cri.go:89] found id: ""
	I0414 17:49:04.888645  213635 logs.go:282] 0 containers: []
	W0414 17:49:04.888658  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:04.888667  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:04.888720  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:04.931279  213635 cri.go:89] found id: ""
	I0414 17:49:04.931307  213635 logs.go:282] 0 containers: []
	W0414 17:49:04.931317  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:04.931325  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:04.931461  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:04.970024  213635 cri.go:89] found id: ""
	I0414 17:49:04.970052  213635 logs.go:282] 0 containers: []
	W0414 17:49:04.970061  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:04.970069  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:04.970138  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:05.012914  213635 cri.go:89] found id: ""
	I0414 17:49:05.012938  213635 logs.go:282] 0 containers: []
	W0414 17:49:05.012958  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:05.012967  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:05.013027  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:05.050788  213635 cri.go:89] found id: ""
	I0414 17:49:05.050811  213635 logs.go:282] 0 containers: []
	W0414 17:49:05.050834  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:05.050842  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:05.050905  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:05.090988  213635 cri.go:89] found id: ""
	I0414 17:49:05.091017  213635 logs.go:282] 0 containers: []
	W0414 17:49:05.091028  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:05.091036  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:05.091101  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:05.127104  213635 cri.go:89] found id: ""
	I0414 17:49:05.127138  213635 logs.go:282] 0 containers: []
	W0414 17:49:05.127149  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:05.127160  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:05.127176  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:05.143792  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:05.143828  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:05.218655  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:05.218680  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:05.218697  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:05.306169  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:05.306201  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:05.347150  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:05.347190  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
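The block above is one full iteration of minikube's control-plane wait loop: poll for a kube-apiserver process with `sudo pgrep`, list CRI containers for each expected component with `sudo crictl ps -a --quiet --name=<component>`, and, when every query returns an empty ID list, gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying about three seconds later. The Go sketch below is illustrative only, not minikube's actual implementation; the `runSSH` helper, the function names, and the retry parameters are assumptions made for the example.

	package main

	import (
		"errors"
		"fmt"
		"strings"
		"time"
	)

	// runSSH is a hypothetical stand-in for minikube's ssh_runner: it would run
	// cmd on the node over SSH and return its stdout. Stubbed out here.
	func runSSH(cmd string) (string, error) {
		return "", errors.New("stub: no node attached")
	}

	func waitForControlPlane(components []string, interval time.Duration, attempts int) error {
		for i := 0; i < attempts; i++ {
			// First, poll for a live kube-apiserver process (the log's pgrep line).
			if _, err := runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
				return nil
			}
			// Otherwise list CRI containers for each expected component; an empty
			// ID list corresponds to `No container was found matching ...` above.
			for _, c := range components {
				out, _ := runSSH(fmt.Sprintf("sudo crictl ps -a --quiet --name=%s", c))
				if strings.TrimSpace(out) == "" {
					fmt.Printf("no container found matching %q\n", c)
				}
			}
			time.Sleep(interval)
		}
		return errors.New("control plane never became ready")
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
		if err := waitForControlPlane(components, 3*time.Second, 2); err != nil {
			fmt.Println(err)
		}
	}

In the log, the component list and the roughly three-second cadence match this shape exactly; every iteration below differs only in its timestamps.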
	I0414 17:49:07.907355  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:07.920775  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:07.920854  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:07.958486  213635 cri.go:89] found id: ""
	I0414 17:49:07.958517  213635 logs.go:282] 0 containers: []
	W0414 17:49:07.958527  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:07.958534  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:07.958600  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:07.995351  213635 cri.go:89] found id: ""
	I0414 17:49:07.995383  213635 logs.go:282] 0 containers: []
	W0414 17:49:07.995394  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:07.995401  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:07.995464  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:08.031830  213635 cri.go:89] found id: ""
	I0414 17:49:08.031864  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.031876  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:08.031885  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:08.031953  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:08.072277  213635 cri.go:89] found id: ""
	I0414 17:49:08.072308  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.072321  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:08.072328  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:08.072400  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:08.107778  213635 cri.go:89] found id: ""
	I0414 17:49:08.107811  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.107823  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:08.107832  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:08.107889  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:08.144220  213635 cri.go:89] found id: ""
	I0414 17:49:08.144254  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.144267  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:08.144276  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:08.144350  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:08.199205  213635 cri.go:89] found id: ""
	I0414 17:49:08.199238  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.199251  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:08.199260  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:08.199329  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:08.236929  213635 cri.go:89] found id: ""
	I0414 17:49:08.236966  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.236978  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:08.236989  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:08.237006  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:08.288285  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:08.288309  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:08.301531  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:08.301562  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:08.370610  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:08.370643  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:08.370663  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:08.449517  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:08.449559  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
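The container-status step uses a double fallback: `which crictl || echo crictl` resolves the binary's full path but degrades to the bare name if `which` finds nothing, and the trailing `|| sudo docker ps -a` switches to Docker when the crictl listing itself fails. A minimal Go sketch of issuing that same compound command through os/exec follows; running it locally under /bin/bash (rather than over SSH, as minikube does) is an assumption of the example.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The same compound fallback shown in the log, run through bash -c.
		cmd := exec.Command("/bin/bash", "-c",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
		out, err := cmd.CombinedOutput()
		fmt.Println(string(out))
		if err != nil {
			// Reached only when both the crictl and docker listings fail.
			fmt.Println("both crictl and docker listing failed:", err)
		}
	}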
	I0414 17:49:10.989149  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:11.004705  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:11.004776  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:11.044842  213635 cri.go:89] found id: ""
	I0414 17:49:11.044872  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.044882  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:11.044889  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:11.044944  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:11.079268  213635 cri.go:89] found id: ""
	I0414 17:49:11.079296  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.079306  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:11.079313  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:11.079373  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:11.111894  213635 cri.go:89] found id: ""
	I0414 17:49:11.111921  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.111931  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:11.111937  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:11.111993  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:11.147005  213635 cri.go:89] found id: ""
	I0414 17:49:11.147029  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.147039  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:11.147046  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:11.147115  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:11.181246  213635 cri.go:89] found id: ""
	I0414 17:49:11.181274  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.181281  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:11.181286  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:11.181333  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:11.222368  213635 cri.go:89] found id: ""
	I0414 17:49:11.222396  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.222404  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:11.222409  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:11.222455  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:11.262336  213635 cri.go:89] found id: ""
	I0414 17:49:11.262360  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.262367  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:11.262373  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:11.262430  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:11.305115  213635 cri.go:89] found id: ""
	I0414 17:49:11.305146  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.305157  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:11.305168  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:11.305180  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:11.340697  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:11.340726  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:11.390526  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:11.390566  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:11.403671  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:11.403699  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:11.478187  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:11.478210  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:11.478225  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:14.068187  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:14.082429  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:14.082502  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:14.118294  213635 cri.go:89] found id: ""
	I0414 17:49:14.118322  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.118333  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:14.118339  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:14.118399  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:14.150631  213635 cri.go:89] found id: ""
	I0414 17:49:14.150661  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.150673  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:14.150680  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:14.150739  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:14.182138  213635 cri.go:89] found id: ""
	I0414 17:49:14.182168  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.182178  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:14.182191  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:14.182245  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:14.215897  213635 cri.go:89] found id: ""
	I0414 17:49:14.215926  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.215939  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:14.215945  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:14.216007  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:14.250709  213635 cri.go:89] found id: ""
	I0414 17:49:14.250735  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.250745  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:14.250752  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:14.250827  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:14.284335  213635 cri.go:89] found id: ""
	I0414 17:49:14.284359  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.284369  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:14.284377  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:14.284437  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:14.320670  213635 cri.go:89] found id: ""
	I0414 17:49:14.320695  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.320705  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:14.320712  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:14.320772  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:14.352588  213635 cri.go:89] found id: ""
	I0414 17:49:14.352612  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.352620  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:14.352630  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:14.352643  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:14.402495  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:14.402527  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:14.415185  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:14.415211  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:14.484937  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:14.484961  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:14.484976  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:14.568927  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:14.568962  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:17.105989  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:17.119732  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:17.119803  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:17.155999  213635 cri.go:89] found id: ""
	I0414 17:49:17.156027  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.156038  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:17.156046  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:17.156117  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:17.190158  213635 cri.go:89] found id: ""
	I0414 17:49:17.190180  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.190188  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:17.190193  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:17.190254  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:17.228075  213635 cri.go:89] found id: ""
	I0414 17:49:17.228116  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.228128  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:17.228135  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:17.228199  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:17.276284  213635 cri.go:89] found id: ""
	I0414 17:49:17.276311  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.276321  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:17.276328  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:17.276391  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:17.323644  213635 cri.go:89] found id: ""
	I0414 17:49:17.323672  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.323684  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:17.323691  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:17.323755  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:17.361870  213635 cri.go:89] found id: ""
	I0414 17:49:17.361898  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.361910  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:17.361917  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:17.361978  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:17.396346  213635 cri.go:89] found id: ""
	I0414 17:49:17.396371  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.396382  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:17.396389  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:17.396450  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:17.434395  213635 cri.go:89] found id: ""
	I0414 17:49:17.434425  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.434434  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:17.434445  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:17.434460  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:17.486946  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:17.486987  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:17.504167  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:17.504200  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:17.596627  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:17.596655  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:17.596671  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:17.688874  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:17.688911  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:20.238457  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:20.252780  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:20.252859  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:20.299511  213635 cri.go:89] found id: ""
	I0414 17:49:20.299535  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.299543  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:20.299549  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:20.299607  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:20.346458  213635 cri.go:89] found id: ""
	I0414 17:49:20.346484  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.346493  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:20.346500  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:20.346552  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:20.390657  213635 cri.go:89] found id: ""
	I0414 17:49:20.390677  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.390684  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:20.390689  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:20.390738  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:20.435444  213635 cri.go:89] found id: ""
	I0414 17:49:20.435468  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.435474  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:20.435480  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:20.435520  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:20.470010  213635 cri.go:89] found id: ""
	I0414 17:49:20.470030  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.470036  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:20.470044  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:20.470089  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:20.517097  213635 cri.go:89] found id: ""
	I0414 17:49:20.517130  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.517141  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:20.517149  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:20.517216  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:20.558688  213635 cri.go:89] found id: ""
	I0414 17:49:20.558717  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.558727  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:20.558733  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:20.558796  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:20.598644  213635 cri.go:89] found id: ""
	I0414 17:49:20.598679  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.598687  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:20.598695  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:20.598706  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:20.674514  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:20.674571  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:20.691779  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:20.691808  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:20.759608  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:20.759640  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:20.759652  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:20.852072  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:20.852104  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:23.392749  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:23.409465  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:23.409526  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:23.449515  213635 cri.go:89] found id: ""
	I0414 17:49:23.449542  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.449552  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:23.449559  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:23.449609  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:23.490201  213635 cri.go:89] found id: ""
	I0414 17:49:23.490225  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.490234  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:23.490242  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:23.490294  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:23.528644  213635 cri.go:89] found id: ""
	I0414 17:49:23.528673  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.528684  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:23.528692  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:23.528755  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:23.572217  213635 cri.go:89] found id: ""
	I0414 17:49:23.572245  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.572256  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:23.572263  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:23.572319  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:23.612901  213635 cri.go:89] found id: ""
	I0414 17:49:23.612922  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.612930  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:23.612936  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:23.612981  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:23.668230  213635 cri.go:89] found id: ""
	I0414 17:49:23.668256  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.668265  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:23.668271  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:23.668322  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:23.714238  213635 cri.go:89] found id: ""
	I0414 17:49:23.714265  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.714275  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:23.714282  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:23.714331  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:23.763817  213635 cri.go:89] found id: ""
	I0414 17:49:23.763853  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.763863  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:23.763872  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:23.763884  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:23.836441  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:23.836486  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:23.861896  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:23.861940  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:23.944757  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:23.944787  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:23.944806  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:24.029884  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:24.029923  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:26.571950  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:26.585122  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:26.585180  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:26.623368  213635 cri.go:89] found id: ""
	I0414 17:49:26.623392  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.623401  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:26.623409  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:26.623463  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:26.657588  213635 cri.go:89] found id: ""
	I0414 17:49:26.657624  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.657635  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:26.657642  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:26.657699  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:26.690827  213635 cri.go:89] found id: ""
	I0414 17:49:26.690854  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.690862  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:26.690867  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:26.690916  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:26.732830  213635 cri.go:89] found id: ""
	I0414 17:49:26.732866  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.732876  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:26.732883  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:26.732946  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:26.767719  213635 cri.go:89] found id: ""
	I0414 17:49:26.767770  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.767783  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:26.767793  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:26.767861  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:26.805504  213635 cri.go:89] found id: ""
	I0414 17:49:26.805531  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.805540  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:26.805547  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:26.805607  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:26.848736  213635 cri.go:89] found id: ""
	I0414 17:49:26.848761  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.848769  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:26.848774  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:26.848831  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:26.888964  213635 cri.go:89] found id: ""
	I0414 17:49:26.888996  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.889006  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:26.889017  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:26.889030  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:26.902789  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:26.902819  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:26.984479  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:26.984503  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:26.984516  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:27.072453  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:27.072491  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:27.114247  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:27.114282  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:29.668064  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:29.685205  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:29.685289  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:29.729725  213635 cri.go:89] found id: ""
	I0414 17:49:29.729753  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.729760  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:29.729766  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:29.729823  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:29.788536  213635 cri.go:89] found id: ""
	I0414 17:49:29.788569  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.788581  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:29.788588  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:29.788656  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:29.832032  213635 cri.go:89] found id: ""
	I0414 17:49:29.832060  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.832069  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:29.832074  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:29.832123  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:29.864981  213635 cri.go:89] found id: ""
	I0414 17:49:29.865009  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.865019  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:29.865025  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:29.865091  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:29.901024  213635 cri.go:89] found id: ""
	I0414 17:49:29.901060  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.901071  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:29.901079  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:29.901149  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:29.938790  213635 cri.go:89] found id: ""
	I0414 17:49:29.938820  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.938832  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:29.938840  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:29.938912  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:29.981414  213635 cri.go:89] found id: ""
	I0414 17:49:29.981445  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.981456  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:29.981463  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:29.981526  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:30.022510  213635 cri.go:89] found id: ""
	I0414 17:49:30.022545  213635 logs.go:282] 0 containers: []
	W0414 17:49:30.022558  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:30.022571  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:30.022588  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:30.077221  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:30.077255  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:30.091513  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:30.091552  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:30.164964  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:30.164991  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:30.165004  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:30.246281  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:30.246321  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:32.807018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:32.825456  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:32.825531  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:32.864079  213635 cri.go:89] found id: ""
	I0414 17:49:32.864116  213635 logs.go:282] 0 containers: []
	W0414 17:49:32.864126  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:32.864133  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:32.864191  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:32.905763  213635 cri.go:89] found id: ""
	I0414 17:49:32.905792  213635 logs.go:282] 0 containers: []
	W0414 17:49:32.905806  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:32.905813  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:32.905894  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:32.944126  213635 cri.go:89] found id: ""
	I0414 17:49:32.944167  213635 logs.go:282] 0 containers: []
	W0414 17:49:32.944186  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:32.944195  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:32.944258  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:32.983511  213635 cri.go:89] found id: ""
	I0414 17:49:32.983549  213635 logs.go:282] 0 containers: []
	W0414 17:49:32.983562  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:32.983571  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:32.983629  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:33.021383  213635 cri.go:89] found id: ""
	I0414 17:49:33.021411  213635 logs.go:282] 0 containers: []
	W0414 17:49:33.021422  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:33.021429  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:33.021488  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:33.058181  213635 cri.go:89] found id: ""
	I0414 17:49:33.058214  213635 logs.go:282] 0 containers: []
	W0414 17:49:33.058225  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:33.058233  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:33.058296  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:33.094426  213635 cri.go:89] found id: ""
	I0414 17:49:33.094459  213635 logs.go:282] 0 containers: []
	W0414 17:49:33.094470  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:33.094479  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:33.094537  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:33.139392  213635 cri.go:89] found id: ""
	I0414 17:49:33.139430  213635 logs.go:282] 0 containers: []
	W0414 17:49:33.139443  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:33.139455  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:33.139471  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:33.218814  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:33.218842  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:33.218860  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:33.325637  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:33.325678  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:33.363443  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:33.363473  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:33.427131  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:33.427167  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:35.942712  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:35.957936  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:35.958027  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:35.998316  213635 cri.go:89] found id: ""
	I0414 17:49:35.998343  213635 logs.go:282] 0 containers: []
	W0414 17:49:35.998354  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:35.998361  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:35.998419  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:36.032107  213635 cri.go:89] found id: ""
	I0414 17:49:36.032139  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.032149  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:36.032156  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:36.032211  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:36.070010  213635 cri.go:89] found id: ""
	I0414 17:49:36.070035  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.070043  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:36.070049  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:36.070104  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:36.105914  213635 cri.go:89] found id: ""
	I0414 17:49:36.105944  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.105962  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:36.105970  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:36.106036  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:36.140378  213635 cri.go:89] found id: ""
	I0414 17:49:36.140406  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.140418  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:36.140425  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:36.140487  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:36.178535  213635 cri.go:89] found id: ""
	I0414 17:49:36.178564  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.178575  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:36.178583  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:36.178652  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:36.217284  213635 cri.go:89] found id: ""
	I0414 17:49:36.217314  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.217324  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:36.217330  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:36.217391  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:36.251770  213635 cri.go:89] found id: ""
	I0414 17:49:36.251805  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.251818  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:36.251835  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:36.251850  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:36.322858  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:36.322906  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:36.337902  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:36.337939  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:36.415729  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:36.415752  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:36.415767  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:36.512960  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:36.513000  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:39.053905  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:39.068768  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:39.068841  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:39.104418  213635 cri.go:89] found id: ""
	I0414 17:49:39.104446  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.104454  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:39.104460  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:39.104520  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:39.144556  213635 cri.go:89] found id: ""
	I0414 17:49:39.144587  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.144598  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:39.144605  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:39.144673  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:39.184890  213635 cri.go:89] found id: ""
	I0414 17:49:39.184923  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.184936  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:39.184946  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:39.185018  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:39.224321  213635 cri.go:89] found id: ""
	I0414 17:49:39.224353  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.224364  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:39.224372  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:39.224431  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:39.275363  213635 cri.go:89] found id: ""
	I0414 17:49:39.275393  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.275403  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:39.275411  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:39.275469  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:39.324682  213635 cri.go:89] found id: ""
	I0414 17:49:39.324715  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.324725  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:39.324733  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:39.324788  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:39.356862  213635 cri.go:89] found id: ""
	I0414 17:49:39.356891  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.356901  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:39.356908  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:39.356970  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:39.392157  213635 cri.go:89] found id: ""
	I0414 17:49:39.392186  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.392197  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:39.392208  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:39.392223  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:39.484945  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:39.484971  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:39.484989  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:39.564891  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:39.564927  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:39.608513  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:39.608543  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:39.672726  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:39.672760  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
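	(The block above is one pass of minikube's control-plane polling loop; the near-identical blocks that follow are the same probe repeated every few seconds until a deadline expires. A minimal bash sketch of the same check, assuming crictl is installed and pointed at CRI-O; the component list simply mirrors what the log queries and is not taken from minikube's source:)
	
	    # Probe the runtime for each expected control-plane container, as the log does.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      # An empty result is what the log reports as: No container was found matching "<name>"
	      [ -z "$ids" ] && echo "no container matching $name" || echo "$name: $ids"
	    done
	
	(Every pass here comes back empty, which is why each cycle falls through to re-gathering the kubelet, dmesg, describe-nodes, CRI-O, and container-status logs.)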
	I0414 17:49:42.189948  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:42.203489  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:42.203560  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:42.243021  213635 cri.go:89] found id: ""
	I0414 17:49:42.243047  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.243057  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:42.243064  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:42.243152  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:42.285782  213635 cri.go:89] found id: ""
	I0414 17:49:42.285807  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.285817  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:42.285824  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:42.285898  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:42.318326  213635 cri.go:89] found id: ""
	I0414 17:49:42.318350  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.318360  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:42.318367  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:42.318421  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:42.351765  213635 cri.go:89] found id: ""
	I0414 17:49:42.351788  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.351795  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:42.351802  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:42.351862  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:42.382539  213635 cri.go:89] found id: ""
	I0414 17:49:42.382564  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.382574  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:42.382582  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:42.382639  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:42.416009  213635 cri.go:89] found id: ""
	I0414 17:49:42.416034  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.416044  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:42.416051  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:42.416107  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:42.447820  213635 cri.go:89] found id: ""
	I0414 17:49:42.447860  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.447871  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:42.447879  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:42.447941  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:42.486157  213635 cri.go:89] found id: ""
	I0414 17:49:42.486179  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.486186  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:42.486195  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:42.486210  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:42.556937  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:42.556963  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:42.556980  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:42.636537  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:42.636569  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:42.676688  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:42.676717  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:42.728391  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:42.728421  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:45.242452  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:45.256486  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:45.256558  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:45.291454  213635 cri.go:89] found id: ""
	I0414 17:49:45.291482  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.291490  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:45.291497  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:45.291552  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:45.328550  213635 cri.go:89] found id: ""
	I0414 17:49:45.328573  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.328583  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:45.328591  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:45.328638  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:45.365121  213635 cri.go:89] found id: ""
	I0414 17:49:45.365148  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.365155  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:45.365161  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:45.365216  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:45.402479  213635 cri.go:89] found id: ""
	I0414 17:49:45.402508  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.402519  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:45.402527  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:45.402580  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:45.433123  213635 cri.go:89] found id: ""
	I0414 17:49:45.433147  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.433155  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:45.433160  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:45.433206  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:45.466351  213635 cri.go:89] found id: ""
	I0414 17:49:45.466376  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.466383  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:45.466390  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:45.466442  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:45.498745  213635 cri.go:89] found id: ""
	I0414 17:49:45.498774  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.498785  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:45.498792  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:45.498866  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:45.531870  213635 cri.go:89] found id: ""
	I0414 17:49:45.531898  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.531908  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:45.531919  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:45.531937  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:45.582230  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:45.582257  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:45.597164  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:45.597197  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:45.666569  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:45.666598  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:45.666616  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:45.746036  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:45.746068  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:48.284590  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:48.297947  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:48.298019  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:48.331443  213635 cri.go:89] found id: ""
	I0414 17:49:48.331469  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.331480  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:48.331487  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:48.331534  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:48.364569  213635 cri.go:89] found id: ""
	I0414 17:49:48.364602  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.364613  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:48.364620  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:48.364683  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:48.398063  213635 cri.go:89] found id: ""
	I0414 17:49:48.398097  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.398109  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:48.398118  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:48.398182  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:48.430783  213635 cri.go:89] found id: ""
	I0414 17:49:48.430808  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.430829  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:48.430837  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:48.430924  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:48.466378  213635 cri.go:89] found id: ""
	I0414 17:49:48.466410  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.466423  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:48.466432  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:48.466656  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:48.499766  213635 cri.go:89] found id: ""
	I0414 17:49:48.499819  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.499829  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:48.499837  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:48.499901  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:48.533192  213635 cri.go:89] found id: ""
	I0414 17:49:48.533218  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.533228  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:48.533235  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:48.533294  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:48.565138  213635 cri.go:89] found id: ""
	I0414 17:49:48.565159  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.565167  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:48.565174  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:48.565183  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:48.616578  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:48.616609  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:48.630209  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:48.630232  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:48.697158  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:48.697184  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:48.697196  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:48.777141  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:48.777177  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:51.322807  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:51.336971  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:51.337037  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:51.373592  213635 cri.go:89] found id: ""
	I0414 17:49:51.373616  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.373623  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:51.373628  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:51.373675  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:51.410753  213635 cri.go:89] found id: ""
	I0414 17:49:51.410782  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.410791  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:51.410796  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:51.410846  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:51.443612  213635 cri.go:89] found id: ""
	I0414 17:49:51.443639  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.443650  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:51.443656  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:51.443717  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:51.476956  213635 cri.go:89] found id: ""
	I0414 17:49:51.476982  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.476990  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:51.476995  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:51.477041  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:51.512295  213635 cri.go:89] found id: ""
	I0414 17:49:51.512330  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.512349  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:51.512357  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:51.512420  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:51.553410  213635 cri.go:89] found id: ""
	I0414 17:49:51.553437  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.553445  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:51.553451  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:51.553514  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:51.593165  213635 cri.go:89] found id: ""
	I0414 17:49:51.593196  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.593205  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:51.593210  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:51.593259  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:51.634382  213635 cri.go:89] found id: ""
	I0414 17:49:51.634425  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.634436  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:51.634446  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:51.634457  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:51.687688  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:51.687725  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:51.703569  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:51.703600  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:51.775371  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:51.775398  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:51.775414  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:51.851890  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:51.851936  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:54.389539  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:54.403233  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:54.403293  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:54.447655  213635 cri.go:89] found id: ""
	I0414 17:49:54.447675  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.447683  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:54.447690  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:54.447736  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:54.486882  213635 cri.go:89] found id: ""
	I0414 17:49:54.486905  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.486912  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:54.486917  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:54.486977  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:54.519544  213635 cri.go:89] found id: ""
	I0414 17:49:54.519570  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.519581  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:54.519588  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:54.519643  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:54.558646  213635 cri.go:89] found id: ""
	I0414 17:49:54.558671  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.558681  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:54.558689  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:54.558735  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:54.600650  213635 cri.go:89] found id: ""
	I0414 17:49:54.600674  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.600680  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:54.600685  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:54.600732  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:54.641206  213635 cri.go:89] found id: ""
	I0414 17:49:54.641231  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.641240  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:54.641247  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:54.641302  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:54.680671  213635 cri.go:89] found id: ""
	I0414 17:49:54.680698  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.680708  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:54.680715  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:54.680765  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:54.721028  213635 cri.go:89] found id: ""
	I0414 17:49:54.721050  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.721056  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:54.721066  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:54.721076  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:54.769755  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:54.769782  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:54.785252  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:54.785273  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:54.855288  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:54.855308  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:54.855322  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:54.952695  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:54.952735  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:57.499933  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:57.514593  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:57.514658  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:57.549526  213635 cri.go:89] found id: ""
	I0414 17:49:57.549550  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.549558  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:57.549564  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:57.549610  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:57.582596  213635 cri.go:89] found id: ""
	I0414 17:49:57.582626  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.582637  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:57.582643  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:57.582695  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:57.622214  213635 cri.go:89] found id: ""
	I0414 17:49:57.622244  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.622252  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:57.622257  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:57.622313  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:57.655388  213635 cri.go:89] found id: ""
	I0414 17:49:57.655415  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.655422  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:57.655428  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:57.655474  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:57.692324  213635 cri.go:89] found id: ""
	I0414 17:49:57.692349  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.692357  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:57.692362  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:57.692407  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:57.725614  213635 cri.go:89] found id: ""
	I0414 17:49:57.725637  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.725644  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:57.725650  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:57.725700  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:57.757747  213635 cri.go:89] found id: ""
	I0414 17:49:57.757779  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.757788  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:57.757794  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:57.757868  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:57.791614  213635 cri.go:89] found id: ""
	I0414 17:49:57.791651  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.791658  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:57.791666  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:57.791676  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:57.839950  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:57.839983  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:57.852850  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:57.852877  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:57.925310  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:57.925338  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:57.925355  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:58.008445  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:58.008484  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:50:00.550402  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:50:00.564239  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:50:00.564296  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:50:00.598410  213635 cri.go:89] found id: ""
	I0414 17:50:00.598439  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.598447  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:50:00.598452  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:50:00.598500  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:50:00.629470  213635 cri.go:89] found id: ""
	I0414 17:50:00.629489  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.629497  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:50:00.629502  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:50:00.629547  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:50:00.660663  213635 cri.go:89] found id: ""
	I0414 17:50:00.660686  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.660695  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:50:00.660703  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:50:00.660780  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:50:00.703422  213635 cri.go:89] found id: ""
	I0414 17:50:00.703450  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.703461  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:50:00.703467  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:50:00.703524  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:50:00.736355  213635 cri.go:89] found id: ""
	I0414 17:50:00.736378  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.736388  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:50:00.736394  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:50:00.736447  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:50:00.771432  213635 cri.go:89] found id: ""
	I0414 17:50:00.771460  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.771470  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:50:00.771478  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:50:00.771544  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:50:00.804453  213635 cri.go:89] found id: ""
	I0414 17:50:00.804474  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.804483  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:50:00.804490  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:50:00.804550  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:50:00.840934  213635 cri.go:89] found id: ""
	I0414 17:50:00.840962  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.840971  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:50:00.840982  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:50:00.840994  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:50:00.888813  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:50:00.888846  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:50:00.901168  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:50:00.901188  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:50:00.970608  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:50:00.970638  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:50:00.970655  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:50:01.054190  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:50:01.054225  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:50:03.592930  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:50:03.607476  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:50:03.607542  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:50:03.647536  213635 cri.go:89] found id: ""
	I0414 17:50:03.647559  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.647567  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:50:03.647572  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:50:03.647616  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:50:03.687053  213635 cri.go:89] found id: ""
	I0414 17:50:03.687078  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.687086  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:50:03.687092  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:50:03.687135  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:50:03.724232  213635 cri.go:89] found id: ""
	I0414 17:50:03.724258  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.724268  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:50:03.724276  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:50:03.724327  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:50:03.758621  213635 cri.go:89] found id: ""
	I0414 17:50:03.758650  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.758661  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:50:03.758668  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:50:03.758735  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:50:03.792524  213635 cri.go:89] found id: ""
	I0414 17:50:03.792553  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.792563  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:50:03.792570  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:50:03.792623  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:50:03.823533  213635 cri.go:89] found id: ""
	I0414 17:50:03.823562  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.823569  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:50:03.823575  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:50:03.823619  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:50:03.855038  213635 cri.go:89] found id: ""
	I0414 17:50:03.855060  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.855067  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:50:03.855072  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:50:03.855122  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:50:03.886260  213635 cri.go:89] found id: ""
	I0414 17:50:03.886288  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.886296  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:50:03.886304  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:50:03.886314  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:50:03.935750  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:50:03.935780  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:50:03.948571  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:50:03.948599  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:50:04.016600  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:50:04.016625  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:50:04.016641  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:50:04.095247  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:50:04.095278  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:50:06.633583  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:50:06.647292  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:50:06.647371  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:50:06.680994  213635 cri.go:89] found id: ""
	I0414 17:50:06.681023  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.681031  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:50:06.681036  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:50:06.681093  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:50:06.715235  213635 cri.go:89] found id: ""
	I0414 17:50:06.715262  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.715269  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:50:06.715275  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:50:06.715333  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:50:06.750320  213635 cri.go:89] found id: ""
	I0414 17:50:06.750349  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.750359  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:50:06.750367  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:50:06.750425  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:50:06.781634  213635 cri.go:89] found id: ""
	I0414 17:50:06.781657  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.781666  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:50:06.781673  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:50:06.781731  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:50:06.812684  213635 cri.go:89] found id: ""
	I0414 17:50:06.812709  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.812719  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:50:06.812727  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:50:06.812785  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:50:06.843417  213635 cri.go:89] found id: ""
	I0414 17:50:06.843447  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.843458  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:50:06.843466  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:50:06.843519  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:50:06.878915  213635 cri.go:89] found id: ""
	I0414 17:50:06.878943  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.878952  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:50:06.878958  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:50:06.879018  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:50:06.911647  213635 cri.go:89] found id: ""
	I0414 17:50:06.911670  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.911680  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:50:06.911705  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:50:06.911720  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:50:06.977253  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:50:06.977286  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:50:06.977304  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:50:07.056442  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:50:07.056475  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:50:07.104053  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:50:07.104082  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:50:07.153444  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:50:07.153483  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:50:09.667392  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:50:09.680695  213635 kubeadm.go:597] duration metric: took 4m3.288338716s to restartPrimaryControlPlane
	W0414 17:50:09.680757  213635 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0414 17:50:09.680787  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 17:50:15.123013  213635 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.442204913s)
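	(After roughly four minutes of empty probes, minikube gives up on restarting the existing control plane and resets it. The two commands it runs next, as a standalone sketch; the binary path and CRI socket are the values from this log and will differ on other setups:)
	
	    # Tear down the previous control plane (mirrors the reset command above).
	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	    # Then confirm the kubelet unit is no longer active before re-initializing.
	    sudo systemctl is-active --quiet kubelet && echo "kubelet still active"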
	I0414 17:50:15.123098  213635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:50:15.137541  213635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:50:15.147676  213635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:50:15.157224  213635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:50:15.157238  213635 kubeadm.go:157] found existing configuration files:
	
	I0414 17:50:15.157273  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:50:15.166484  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:50:15.166525  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:50:15.175831  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:50:15.184692  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:50:15.184731  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:50:15.193871  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:50:15.202947  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:50:15.202993  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:50:15.212451  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:50:15.221477  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:50:15.221512  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
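	(The grep/rm pairs above are a stale-kubeconfig sweep: any config file that does not reference the expected control-plane endpoint is removed before kubeadm init runs. Condensed into one loop, a sketch assuming the stock paths shown in the log:)
	
	    # Keep a kubeconfig only if it already points at the expected endpoint.
	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin kubelet controller-manager scheduler; do
	      conf="/etc/kubernetes/$f.conf"
	      sudo grep -q "$endpoint" "$conf" 2>/dev/null || sudo rm -f "$conf"
	    done
	
	(In this run every grep exits non-zero because the reset already removed the files, so the rm calls are no-ops.)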
	I0414 17:50:15.231277  213635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:50:15.294259  213635 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 17:50:15.294330  213635 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:50:15.422321  213635 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:50:15.422476  213635 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:50:15.422622  213635 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 17:50:15.596146  213635 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:50:15.598667  213635 out.go:235]   - Generating certificates and keys ...
	I0414 17:50:15.598769  213635 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:50:15.598859  213635 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:50:15.598976  213635 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 17:50:15.599034  213635 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 17:50:15.599148  213635 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 17:50:15.599238  213635 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 17:50:15.599301  213635 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 17:50:15.599353  213635 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 17:50:15.599416  213635 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 17:50:15.599514  213635 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 17:50:15.599573  213635 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 17:50:15.599654  213635 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:50:15.664653  213635 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:50:15.743669  213635 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:50:15.813965  213635 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:50:16.089174  213635 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:50:16.103702  213635 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:50:16.104792  213635 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:50:16.104884  213635 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:50:16.250169  213635 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:50:16.252518  213635 out.go:235]   - Booting up control plane ...
	I0414 17:50:16.252640  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:50:16.262331  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:50:16.263648  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:50:16.264988  213635 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:50:16.267648  213635 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
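	(At this point kubeadm has written the four static Pod manifests and is waiting for the kubelet to start them from disk; no API server is involved yet. A quick way to see what the kubelet is being asked to run, assuming the default manifest directory named in the log:)
	
	    # Static Pod manifests the kubelet should pick up from disk.
	    ls /etc/kubernetes/manifests
	    # expected: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml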
	I0414 17:50:56.269443  213635 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 17:50:56.270353  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:50:56.270523  213635 kubeadm.go:310] [kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:51:01.271007  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:51:01.271253  213635 kubeadm.go:310] [kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:51:11.271837  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:51:11.272049  213635 kubeadm.go:310] [kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:51:31.273087  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:51:31.273315  213635 kubeadm.go:310] [kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:52:11.275552  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:52:11.275856  213635 kubeadm.go:310] [kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:52:11.275878  213635 kubeadm.go:310] 
	I0414 17:52:11.275927  213635 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 17:52:11.275981  213635 kubeadm.go:310] 		timed out waiting for the condition
	I0414 17:52:11.275991  213635 kubeadm.go:310] 
	I0414 17:52:11.276038  213635 kubeadm.go:310] 	This error is likely caused by:
	I0414 17:52:11.276092  213635 kubeadm.go:310] 		- The kubelet is not running
	I0414 17:52:11.276213  213635 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 17:52:11.276222  213635 kubeadm.go:310] 
	I0414 17:52:11.276375  213635 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 17:52:11.276431  213635 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 17:52:11.276482  213635 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 17:52:11.276502  213635 kubeadm.go:310] 
	I0414 17:52:11.276617  213635 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 17:52:11.276722  213635 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 17:52:11.276733  213635 kubeadm.go:310] 
	I0414 17:52:11.276827  213635 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 17:52:11.276902  213635 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 17:52:11.276994  213635 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 17:52:11.277119  213635 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 17:52:11.277137  213635 kubeadm.go:310] 
	I0414 17:52:11.277720  213635 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:52:11.277871  213635 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 17:52:11.277974  213635 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
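	The repeated [kubelet-check] lines above are kubeadm polling the kubelet's local health endpoint and getting connection refused, i.e. the kubelet process never came up on the node. The same probe, plus the two troubleshooting commands kubeadm suggests, can be replayed by hand over minikube's ssh; the URL and commands are taken from the log, while the profile name is an assumption based on the failing test invocation at the end of this section:

	out/minikube-linux-amd64 -p old-k8s-version-768580 ssh "curl -sSL http://localhost:10248/healthz"   # prints "ok" on a healthy kubelet; refused in this run
	out/minikube-linux-amd64 -p old-k8s-version-768580 ssh "systemctl status kubelet"                   # expected active; not running here
	out/minikube-linux-amd64 -p old-k8s-version-768580 ssh "sudo journalctl -xeu kubelet | tail -n 50"  # shows why the kubelet exited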
	W0414 17:52:11.278218  213635 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0414 17:52:11.278258  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 17:52:11.738009  213635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:52:11.752929  213635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:52:11.762849  213635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:52:11.762865  213635 kubeadm.go:157] found existing configuration files:
	
	I0414 17:52:11.762901  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:52:11.772188  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:52:11.772240  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:52:11.781466  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:52:11.790582  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:52:11.790624  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:52:11.799766  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:52:11.808443  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:52:11.808481  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:52:11.817544  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:52:11.826418  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:52:11.826464  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
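	The grep/rm sequence above is minikube's stale-kubeconfig cleanup between init attempts: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint. A condensed sketch of the same check (file names and endpoint are copied from the log; the loop itself is illustrative, not minikube's actual implementation):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # grep exit status 2 means the file is missing (already removed by the 'kubeadm reset' above),
	  # status 1 means it exists but points elsewhere; in either case minikube deletes it
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done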
	I0414 17:52:11.835946  213635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:52:11.910031  213635 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 17:52:11.910113  213635 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:52:12.048882  213635 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:52:12.049032  213635 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:52:12.049160  213635 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 17:52:12.216124  213635 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:52:12.218841  213635 out.go:235]   - Generating certificates and keys ...
	I0414 17:52:12.218938  213635 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:52:12.219030  213635 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:52:12.219153  213635 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 17:52:12.219244  213635 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 17:52:12.219342  213635 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 17:52:12.219420  213635 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 17:52:12.219507  213635 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 17:52:12.219612  213635 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 17:52:12.219690  213635 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 17:52:12.219802  213635 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 17:52:12.219867  213635 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 17:52:12.219917  213635 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:52:12.485118  213635 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:52:12.699901  213635 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:52:12.798407  213635 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:52:12.941803  213635 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:52:12.964937  213635 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:52:12.965897  213635 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:52:12.966059  213635 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:52:13.109607  213635 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:52:13.112109  213635 out.go:235]   - Booting up control plane ...
	I0414 17:52:13.112248  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:52:13.115664  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:52:13.117940  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:52:13.119128  213635 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:52:13.123525  213635 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 17:52:53.126895  213635 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 17:52:53.127019  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:52:53.127237  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:52:58.127800  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:52:58.127997  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:53:08.128675  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:53:08.128878  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:53:28.129416  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:53:28.129642  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:54:08.127998  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:54:08.128303  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:54:08.128326  213635 kubeadm.go:310] 
	I0414 17:54:08.128362  213635 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 17:54:08.128505  213635 kubeadm.go:310] 		timed out waiting for the condition
	I0414 17:54:08.128527  213635 kubeadm.go:310] 
	I0414 17:54:08.128595  213635 kubeadm.go:310] 	This error is likely caused by:
	I0414 17:54:08.128640  213635 kubeadm.go:310] 		- The kubelet is not running
	I0414 17:54:08.128791  213635 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 17:54:08.128814  213635 kubeadm.go:310] 
	I0414 17:54:08.128946  213635 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 17:54:08.128997  213635 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 17:54:08.129043  213635 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 17:54:08.129052  213635 kubeadm.go:310] 
	I0414 17:54:08.129167  213635 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 17:54:08.129296  213635 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 17:54:08.129314  213635 kubeadm.go:310] 
	I0414 17:54:08.129479  213635 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 17:54:08.129615  213635 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 17:54:08.129706  213635 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 17:54:08.129814  213635 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 17:54:08.129824  213635 kubeadm.go:310] 
	I0414 17:54:08.130345  213635 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:54:08.130443  213635 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 17:54:08.130555  213635 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 17:54:08.130646  213635 kubeadm.go:394] duration metric: took 8m1.792756267s to StartCluster
	I0414 17:54:08.130721  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:54:08.130802  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:54:08.175207  213635 cri.go:89] found id: ""
	I0414 17:54:08.175243  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.175251  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:54:08.175257  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:54:08.175311  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:54:08.209345  213635 cri.go:89] found id: ""
	I0414 17:54:08.209370  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.209377  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:54:08.209382  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:54:08.209428  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:54:08.244901  213635 cri.go:89] found id: ""
	I0414 17:54:08.244937  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.244946  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:54:08.244952  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:54:08.245022  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:54:08.279974  213635 cri.go:89] found id: ""
	I0414 17:54:08.279999  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.280006  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:54:08.280011  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:54:08.280065  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:54:08.312666  213635 cri.go:89] found id: ""
	I0414 17:54:08.312691  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.312701  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:54:08.312708  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:54:08.312761  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:54:08.345579  213635 cri.go:89] found id: ""
	I0414 17:54:08.345609  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.345619  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:54:08.345627  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:54:08.345682  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:54:08.377810  213635 cri.go:89] found id: ""
	I0414 17:54:08.377844  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.377853  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:54:08.377858  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:54:08.377900  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:54:08.409648  213635 cri.go:89] found id: ""
	I0414 17:54:08.409673  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.409681  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
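	Having timed out, minikube audits every expected control-plane container by name and finds none, confirming that the kubelet never launched any static pods. The same audit as a single loop (component names are the ones queried in the log; the loop form is an illustration):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  printf '%-24s %s\n' "$c" "$(sudo crictl ps -a --quiet --name="$c" | wc -l)"   # container count; 0 everywhere in this run
	done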
	I0414 17:54:08.409697  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:54:08.409708  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:54:08.422905  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:54:08.422930  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:54:08.495193  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:54:08.495217  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:54:08.495232  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:54:08.603072  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:54:08.603108  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:54:08.640028  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:54:08.640058  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
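	The "Gathering logs" steps above are the places to look when triaging this failure; the exact commands minikube runs on the node are collected here for convenience (the kubectl step fails in this run because no apiserver is listening on 8443):

	sudo journalctl -u kubelet -n 400        # the component that never became healthy
	sudo journalctl -u crio -n 400           # CRI-O runtime logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig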
	W0414 17:54:08.690480  213635 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 17:54:08.690537  213635 out.go:270] * 
	W0414 17:54:08.690590  213635 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 17:54:08.690605  213635 out.go:270] * 
	W0414 17:54:08.691392  213635 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
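	The box above asks reporters to attach full logs when filing an issue; the command it names, scoped to this test's profile, would look like the following (the -p flag and profile name are assumptions based on the failing test invocation below):

	out/minikube-linux-amd64 -p old-k8s-version-768580 logs --file=logs.txt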
	I0414 17:54:08.694565  213635 out.go:201] 
	W0414 17:54:08.695675  213635 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 17:54:08.695709  213635 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 17:54:08.695724  213635 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 17:54:08.697684  213635 out.go:201] 

** /stderr **
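The kubeadm output above shows the kubelet never answering its health check: every probe of http://localhost:10248/healthz is refused until the 4m0s wait expires. The checks kubeadm suggests can be re-run by hand from the host; a minimal sketch, assuming the profile name and cri-o socket path shown in this log (both specific to this run):

	# Inspect kubelet state and recent journal entries inside the VM:
	out/minikube-linux-amd64 -p old-k8s-version-768580 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-768580 ssh "sudo journalctl -xeu kubelet | tail -n 50"
	# Repeat the healthz probe that failed with 'connection refused':
	out/minikube-linux-amd64 -p old-k8s-version-768580 ssh "curl -sSL http://localhost:10248/healthz"
	# List control-plane containers over the cri-o socket, per the kubeadm hint:
	out/minikube-linux-amd64 -p old-k8s-version-768580 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"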
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-768580 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
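Exit status 109 corresponds to the K8S_KUBELET_NOT_RUNNING error raised above, and the log's own suggestion is to retry with an explicit kubelet cgroup driver. A sketch of that retry, reusing the failed run's arguments plus the suggested flag (the flag comes from the suggestion in the log, not from a verified fix for this failure):

	out/minikube-linux-amd64 start -p old-k8s-version-768580 --memory=2200 \
	  --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd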
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-768580 -n old-k8s-version-768580
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-768580 -n old-k8s-version-768580: exit status 2 (233.753619ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
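The harness tolerates the non-zero exit here because minikube status reports component state through its exit code as well as its output: the host is Running while the Kubernetes components are not, which matches the failed start above. For a machine-readable view of the same breakdown, one option (assuming a minikube build with JSON output support) is:

	out/minikube-linux-amd64 status -p old-k8s-version-768580 --output=json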
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-768580 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-768580 logs -n 25: (1.045233341s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p default-k8s-diff-port-061428       | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:43 UTC | 14 Apr 25 17:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:43 UTC | 14 Apr 25 17:49 UTC |
	|         | default-k8s-diff-port-061428                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-418468            | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:43 UTC | 14 Apr 25 17:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-418468                                  | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:43 UTC | 14 Apr 25 17:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-768580        | old-k8s-version-768580       | jenkins | v1.35.0 | 14 Apr 25 17:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-418468                 | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:45 UTC | 14 Apr 25 17:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-418468                                  | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:45 UTC | 14 Apr 25 17:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-768580                              | old-k8s-version-768580       | jenkins | v1.35.0 | 14 Apr 25 17:45 UTC | 14 Apr 25 17:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-768580             | old-k8s-version-768580       | jenkins | v1.35.0 | 14 Apr 25 17:45 UTC | 14 Apr 25 17:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-768580                              | old-k8s-version-768580       | jenkins | v1.35.0 | 14 Apr 25 17:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-061428                           | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | default-k8s-diff-port-061428                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | default-k8s-diff-port-061428                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | default-k8s-diff-port-061428                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | default-k8s-diff-port-061428                           |                              |         |         |                     |                     |
	| image   | no-preload-721806 image list                           | no-preload-721806            | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-721806                                   | no-preload-721806            | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-721806                                   | no-preload-721806            | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-721806                                   | no-preload-721806            | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	| delete  | -p no-preload-721806                                   | no-preload-721806            | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	| image   | embed-certs-418468 image list                          | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:51 UTC | 14 Apr 25 17:51 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-418468                                  | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:51 UTC | 14 Apr 25 17:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-418468                                  | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:51 UTC | 14 Apr 25 17:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-418468                                  | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:51 UTC | 14 Apr 25 17:51 UTC |
	| delete  | -p embed-certs-418468                                  | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:51 UTC | 14 Apr 25 17:51 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 17:45:23
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 17:45:23.282546  213635 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:45:23.282636  213635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:45:23.282647  213635 out.go:358] Setting ErrFile to fd 2...
	I0414 17:45:23.282663  213635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:45:23.282871  213635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	I0414 17:45:23.283429  213635 out.go:352] Setting JSON to false
	I0414 17:45:23.284348  213635 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8821,"bootTime":1744643902,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 17:45:23.284402  213635 start.go:139] virtualization: kvm guest
	I0414 17:45:23.286322  213635 out.go:177] * [old-k8s-version-768580] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 17:45:23.287426  213635 out.go:177]   - MINIKUBE_LOCATION=20349
	I0414 17:45:23.287431  213635 notify.go:220] Checking for updates...
	I0414 17:45:23.289881  213635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 17:45:23.291059  213635 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:45:23.292002  213635 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 17:45:23.293350  213635 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 17:45:23.294814  213635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 17:45:23.296431  213635 config.go:182] Loaded profile config "old-k8s-version-768580": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 17:45:23.296945  213635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:45:23.296998  213635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:45:23.313119  213635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37103
	I0414 17:45:23.313580  213635 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:45:23.314124  213635 main.go:141] libmachine: Using API Version  1
	I0414 17:45:23.314148  213635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:45:23.314493  213635 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:45:23.314664  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:23.316572  213635 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0414 17:45:23.317553  213635 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 17:45:23.317841  213635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:45:23.317876  213635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:45:23.333791  213635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44023
	I0414 17:45:23.334298  213635 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:45:23.334832  213635 main.go:141] libmachine: Using API Version  1
	I0414 17:45:23.334859  213635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:45:23.335206  213635 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:45:23.335410  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:23.372523  213635 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 17:45:23.373766  213635 start.go:297] selected driver: kvm2
	I0414 17:45:23.373785  213635 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-768580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:45:23.373971  213635 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 17:45:23.374697  213635 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:45:23.374756  213635 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20349-149500/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 17:45:23.390328  213635 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 17:45:23.390891  213635 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 17:45:23.390939  213635 cni.go:84] Creating CNI manager for ""
	I0414 17:45:23.390997  213635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:45:23.391057  213635 start.go:340] cluster config:
	{Name:old-k8s-version-768580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:45:23.391177  213635 iso.go:125] acquiring lock: {Name:mk56ab209abfa01de10f2f82564ecd03de00499a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:45:23.393503  213635 out.go:177] * Starting "old-k8s-version-768580" primary control-plane node in "old-k8s-version-768580" cluster
	I0414 17:45:18.829481  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Start
	I0414 17:45:18.829626  213406 main.go:141] libmachine: (embed-certs-418468) starting domain...
	I0414 17:45:18.829645  213406 main.go:141] libmachine: (embed-certs-418468) ensuring networks are active...
	I0414 17:45:18.830375  213406 main.go:141] libmachine: (embed-certs-418468) Ensuring network default is active
	I0414 17:45:18.830697  213406 main.go:141] libmachine: (embed-certs-418468) Ensuring network mk-embed-certs-418468 is active
	I0414 17:45:18.831060  213406 main.go:141] libmachine: (embed-certs-418468) getting domain XML...
	I0414 17:45:18.831881  213406 main.go:141] libmachine: (embed-certs-418468) creating domain...
	I0414 17:45:20.130585  213406 main.go:141] libmachine: (embed-certs-418468) waiting for IP...
	I0414 17:45:20.131429  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:20.131906  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:20.131976  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:20.131884  213441 retry.go:31] will retry after 192.442813ms: waiting for domain to come up
	I0414 17:45:20.326250  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:20.326808  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:20.326847  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:20.326777  213441 retry.go:31] will retry after 380.44265ms: waiting for domain to come up
	I0414 17:45:20.709212  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:20.709718  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:20.709747  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:20.709659  213441 retry.go:31] will retry after 412.048423ms: waiting for domain to come up
	I0414 17:45:21.123129  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:21.123522  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:21.123544  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:21.123486  213441 retry.go:31] will retry after 384.561435ms: waiting for domain to come up
	I0414 17:45:21.510029  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:21.510559  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:21.510591  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:21.510521  213441 retry.go:31] will retry after 501.73701ms: waiting for domain to come up
	I0414 17:45:22.014298  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:22.014882  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:22.014914  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:22.014842  213441 retry.go:31] will retry after 757.183938ms: waiting for domain to come up
	I0414 17:45:22.773705  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:22.774323  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:22.774350  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:22.774269  213441 retry.go:31] will retry after 986.137988ms: waiting for domain to come up
	I0414 17:45:20.888278  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:23.386664  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:24.646290  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:27.145214  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:23.394590  213635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 17:45:23.394621  213635 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 17:45:23.394628  213635 cache.go:56] Caching tarball of preloaded images
	I0414 17:45:23.394721  213635 preload.go:172] Found /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 17:45:23.394735  213635 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0414 17:45:23.394836  213635 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/config.json ...
	I0414 17:45:23.395013  213635 start.go:360] acquireMachinesLock for old-k8s-version-768580: {Name:mk6f64d523f60ec1e047c10a4c586315976dcd43 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 17:45:23.762349  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:23.762955  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:23.762979  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:23.762917  213441 retry.go:31] will retry after 1.10793688s: waiting for domain to come up
	I0414 17:45:24.872355  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:24.872838  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:24.872868  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:24.872798  213441 retry.go:31] will retry after 1.289889749s: waiting for domain to come up
	I0414 17:45:26.163838  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:26.164300  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:26.164340  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:26.164276  213441 retry.go:31] will retry after 1.779294897s: waiting for domain to come up
	I0414 17:45:27.946417  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:27.946918  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:27.946955  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:27.946893  213441 retry.go:31] will retry after 1.873070528s: waiting for domain to come up
	I0414 17:45:25.887339  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:27.888458  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:30.386702  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:29.147468  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:31.647410  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:29.821493  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:29.822082  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:29.822114  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:29.822017  213441 retry.go:31] will retry after 2.200299666s: waiting for domain to come up
	I0414 17:45:32.024275  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:32.024774  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:32.024804  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:32.024731  213441 retry.go:31] will retry after 4.490034828s: waiting for domain to come up
	I0414 17:45:32.885679  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:34.886662  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:34.145579  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:36.146382  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:38.146697  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:38.262514  213635 start.go:364] duration metric: took 14.867477628s to acquireMachinesLock for "old-k8s-version-768580"
	I0414 17:45:38.262567  213635 start.go:96] Skipping create...Using existing machine configuration
	I0414 17:45:38.262576  213635 fix.go:54] fixHost starting: 
	I0414 17:45:38.262931  213635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:45:38.262975  213635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:45:38.282724  213635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39841
	I0414 17:45:38.283218  213635 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:45:38.283779  213635 main.go:141] libmachine: Using API Version  1
	I0414 17:45:38.283810  213635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:45:38.284194  213635 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:45:38.284403  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:38.284564  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetState
	I0414 17:45:38.285903  213635 fix.go:112] recreateIfNeeded on old-k8s-version-768580: state=Stopped err=<nil>
	I0414 17:45:38.285937  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	W0414 17:45:38.286051  213635 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 17:45:38.287537  213635 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-768580" ...
	I0414 17:45:36.517497  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.518002  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has current primary IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.518029  213406 main.go:141] libmachine: (embed-certs-418468) found domain IP: 192.168.50.199
	I0414 17:45:36.518042  213406 main.go:141] libmachine: (embed-certs-418468) reserving static IP address...
	I0414 17:45:36.518423  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "embed-certs-418468", mac: "52:54:00:2f:33:03", ip: "192.168.50.199"} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:36.518454  213406 main.go:141] libmachine: (embed-certs-418468) DBG | skip adding static IP to network mk-embed-certs-418468 - found existing host DHCP lease matching {name: "embed-certs-418468", mac: "52:54:00:2f:33:03", ip: "192.168.50.199"}
	I0414 17:45:36.518467  213406 main.go:141] libmachine: (embed-certs-418468) reserved static IP address 192.168.50.199 for domain embed-certs-418468
	I0414 17:45:36.518485  213406 main.go:141] libmachine: (embed-certs-418468) waiting for SSH...
	I0414 17:45:36.518500  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Getting to WaitForSSH function...
	I0414 17:45:36.520360  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.520616  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:36.520653  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.520758  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Using SSH client type: external
	I0414 17:45:36.520776  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Using SSH private key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa (-rw-------)
	I0414 17:45:36.520809  213406 main.go:141] libmachine: (embed-certs-418468) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.199 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 17:45:36.520821  213406 main.go:141] libmachine: (embed-certs-418468) DBG | About to run SSH command:
	I0414 17:45:36.520831  213406 main.go:141] libmachine: (embed-certs-418468) DBG | exit 0
	I0414 17:45:36.649576  213406 main.go:141] libmachine: (embed-certs-418468) DBG | SSH cmd err, output: <nil>: 
	I0414 17:45:36.649973  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetConfigRaw
	I0414 17:45:36.650596  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetIP
	I0414 17:45:36.653078  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.653409  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:36.653438  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.653654  213406 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/config.json ...
	I0414 17:45:36.653850  213406 machine.go:93] provisionDockerMachine start ...
	I0414 17:45:36.653883  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:45:36.654093  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:36.656193  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.656501  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:36.656527  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.656658  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:36.656818  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:36.656950  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:36.657070  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:36.657214  213406 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:36.657429  213406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.199 22 <nil> <nil>}
	I0414 17:45:36.657439  213406 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 17:45:36.765740  213406 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 17:45:36.765765  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetMachineName
	I0414 17:45:36.766013  213406 buildroot.go:166] provisioning hostname "embed-certs-418468"
	I0414 17:45:36.766041  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetMachineName
	I0414 17:45:36.766237  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:36.768833  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.769137  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:36.769162  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.769335  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:36.769500  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:36.769623  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:36.769731  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:36.769886  213406 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:36.770105  213406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.199 22 <nil> <nil>}
	I0414 17:45:36.770120  213406 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-418468 && echo "embed-certs-418468" | sudo tee /etc/hostname
	I0414 17:45:36.893279  213406 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-418468
	
	I0414 17:45:36.893301  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:36.896024  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.896386  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:36.896415  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.896583  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:36.896764  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:36.896953  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:36.897101  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:36.897270  213406 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:36.897545  213406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.199 22 <nil> <nil>}
	I0414 17:45:36.897570  213406 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-418468' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-418468/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-418468' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 17:45:37.024782  213406 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 17:45:37.024811  213406 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20349-149500/.minikube CaCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20349-149500/.minikube}
	I0414 17:45:37.024840  213406 buildroot.go:174] setting up certificates
	I0414 17:45:37.024850  213406 provision.go:84] configureAuth start
	I0414 17:45:37.024858  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetMachineName
	I0414 17:45:37.025122  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetIP
	I0414 17:45:37.027788  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.028176  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:37.028213  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.028409  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:37.030616  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.030956  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:37.030981  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.031177  213406 provision.go:143] copyHostCerts
	I0414 17:45:37.031234  213406 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem, removing ...
	I0414 17:45:37.031248  213406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem
	I0414 17:45:37.031310  213406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem (1082 bytes)
	I0414 17:45:37.031401  213406 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem, removing ...
	I0414 17:45:37.031409  213406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem
	I0414 17:45:37.031435  213406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem (1123 bytes)
	I0414 17:45:37.031497  213406 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem, removing ...
	I0414 17:45:37.031504  213406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem
	I0414 17:45:37.031523  213406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem (1675 bytes)
	I0414 17:45:37.031647  213406 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem org=jenkins.embed-certs-418468 san=[127.0.0.1 192.168.50.199 embed-certs-418468 localhost minikube]
	I0414 17:45:37.627895  213406 provision.go:177] copyRemoteCerts
	I0414 17:45:37.627953  213406 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 17:45:37.627976  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:37.630648  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.630947  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:37.630970  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.631155  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:37.631352  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:37.631526  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:37.631687  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:45:37.716473  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 17:45:37.739929  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 17:45:37.762662  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0414 17:45:37.785121  213406 provision.go:87] duration metric: took 760.257482ms to configureAuth
	I0414 17:45:37.785152  213406 buildroot.go:189] setting minikube options for container-runtime
	I0414 17:45:37.785381  213406 config.go:182] Loaded profile config "embed-certs-418468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:45:37.785455  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:37.788353  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.788678  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:37.788705  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.788883  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:37.789017  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:37.789194  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:37.789409  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:37.789591  213406 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:37.789865  213406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.199 22 <nil> <nil>}
	I0414 17:45:37.789886  213406 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 17:45:38.021469  213406 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 17:45:38.021530  213406 machine.go:96] duration metric: took 1.367637028s to provisionDockerMachine
	I0414 17:45:38.021548  213406 start.go:293] postStartSetup for "embed-certs-418468" (driver="kvm2")
	I0414 17:45:38.021567  213406 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 17:45:38.021593  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:45:38.021949  213406 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 17:45:38.021980  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:38.024762  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.025141  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:38.025169  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.025357  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:38.025523  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:38.025702  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:38.025862  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:45:38.112512  213406 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 17:45:38.116757  213406 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 17:45:38.116780  213406 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/addons for local assets ...
	I0414 17:45:38.116832  213406 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/files for local assets ...
	I0414 17:45:38.116909  213406 filesync.go:149] local asset: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem -> 1566332.pem in /etc/ssl/certs
	I0414 17:45:38.116994  213406 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 17:45:38.126428  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:45:38.149529  213406 start.go:296] duration metric: took 127.965801ms for postStartSetup
	I0414 17:45:38.149559  213406 fix.go:56] duration metric: took 19.339332592s for fixHost
	I0414 17:45:38.149597  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:38.152452  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.152857  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:38.152886  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.153029  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:38.153208  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:38.153357  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:38.153527  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:38.153719  213406 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:38.153980  213406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.199 22 <nil> <nil>}
	I0414 17:45:38.153992  213406 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 17:45:38.262398  213406 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744652738.233356501
	
	I0414 17:45:38.262419  213406 fix.go:216] guest clock: 1744652738.233356501
	I0414 17:45:38.262426  213406 fix.go:229] Guest: 2025-04-14 17:45:38.233356501 +0000 UTC Remote: 2025-04-14 17:45:38.149564097 +0000 UTC m=+19.473974968 (delta=83.792404ms)
	I0414 17:45:38.262443  213406 fix.go:200] guest clock delta is within tolerance: 83.792404ms
	I0414 17:45:38.262448  213406 start.go:83] releasing machines lock for "embed-certs-418468", held for 19.452231962s
	I0414 17:45:38.262473  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:45:38.262756  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetIP
	I0414 17:45:38.265776  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.266164  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:38.266194  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.266350  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:45:38.266870  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:45:38.267040  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:45:38.267139  213406 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 17:45:38.267189  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:38.267240  213406 ssh_runner.go:195] Run: cat /version.json
	I0414 17:45:38.267261  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:38.269779  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.270093  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:38.270121  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.270142  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.270286  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:38.270481  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:38.270582  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:38.270601  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.270633  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:38.270844  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:38.270834  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:45:38.270994  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:38.271141  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:38.271286  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:45:38.360262  213406 ssh_runner.go:195] Run: systemctl --version
	I0414 17:45:38.384263  213406 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 17:45:38.531682  213406 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 17:45:38.539705  213406 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 17:45:38.539793  213406 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 17:45:38.557292  213406 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 17:45:38.557314  213406 start.go:495] detecting cgroup driver to use...
	I0414 17:45:38.557377  213406 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 17:45:38.573739  213406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 17:45:38.587350  213406 docker.go:217] disabling cri-docker service (if available) ...
	I0414 17:45:38.587392  213406 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 17:45:38.601142  213406 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 17:45:38.615569  213406 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 17:45:38.729585  213406 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 17:45:38.866071  213406 docker.go:233] disabling docker service ...
	I0414 17:45:38.866151  213406 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 17:45:38.881173  213406 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 17:45:38.895808  213406 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 17:45:39.055748  213406 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 17:45:39.185218  213406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 17:45:39.200427  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 17:45:39.223755  213406 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 17:45:39.223823  213406 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:39.235661  213406 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 17:45:39.235737  213406 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:39.248125  213406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:39.260302  213406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:39.270988  213406 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 17:45:39.281488  213406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:39.293593  213406 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:39.314797  213406 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:39.325696  213406 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 17:45:39.334593  213406 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 17:45:39.334634  213406 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 17:45:39.347505  213406 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 17:45:39.357965  213406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:45:39.484049  213406 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 17:45:39.597745  213406 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 17:45:39.597853  213406 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 17:45:39.602871  213406 start.go:563] Will wait 60s for crictl version
	I0414 17:45:39.602925  213406 ssh_runner.go:195] Run: which crictl
	I0414 17:45:39.606796  213406 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 17:45:39.649955  213406 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 17:45:39.650046  213406 ssh_runner.go:195] Run: crio --version
	I0414 17:45:39.681673  213406 ssh_runner.go:195] Run: crio --version
	I0414 17:45:39.710974  213406 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 17:45:36.888095  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:39.387438  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:40.148510  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:42.647398  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:38.288730  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .Start
	I0414 17:45:38.288903  213635 main.go:141] libmachine: (old-k8s-version-768580) starting domain...
	I0414 17:45:38.288928  213635 main.go:141] libmachine: (old-k8s-version-768580) ensuring networks are active...
	I0414 17:45:38.289671  213635 main.go:141] libmachine: (old-k8s-version-768580) Ensuring network default is active
	I0414 17:45:38.290082  213635 main.go:141] libmachine: (old-k8s-version-768580) Ensuring network mk-old-k8s-version-768580 is active
	I0414 17:45:38.290509  213635 main.go:141] libmachine: (old-k8s-version-768580) getting domain XML...
	I0414 17:45:38.291270  213635 main.go:141] libmachine: (old-k8s-version-768580) creating domain...
	I0414 17:45:39.584359  213635 main.go:141] libmachine: (old-k8s-version-768580) waiting for IP...
	I0414 17:45:39.585518  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:39.586108  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:39.586195  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:39.586107  213733 retry.go:31] will retry after 251.417692ms: waiting for domain to come up
	I0414 17:45:39.839778  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:39.840371  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:39.840397  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:39.840338  213733 retry.go:31] will retry after 258.330025ms: waiting for domain to come up
	I0414 17:45:40.100989  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:40.101667  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:40.101696  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:40.101631  213733 retry.go:31] will retry after 334.368733ms: waiting for domain to come up
	I0414 17:45:40.437266  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:40.438218  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:40.438251  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:40.438188  213733 retry.go:31] will retry after 588.313555ms: waiting for domain to come up
	I0414 17:45:41.027969  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:41.028685  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:41.028713  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:41.028667  213733 retry.go:31] will retry after 582.787602ms: waiting for domain to come up
	I0414 17:45:41.613756  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:41.614424  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:41.614476  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:41.614383  213733 retry.go:31] will retry after 695.01431ms: waiting for domain to come up
	I0414 17:45:42.311573  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:42.312134  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:42.312168  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:42.312092  213733 retry.go:31] will retry after 1.050124039s: waiting for domain to come up
	I0414 17:45:39.712262  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetIP
	I0414 17:45:39.715292  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:39.715742  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:39.715790  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:39.715889  213406 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0414 17:45:39.720056  213406 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 17:45:39.736486  213406 kubeadm.go:883] updating cluster {Name:embed-certs-418468 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-418468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.199 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 17:45:39.736610  213406 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 17:45:39.736663  213406 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:45:39.774478  213406 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 17:45:39.774571  213406 ssh_runner.go:195] Run: which lz4
	I0414 17:45:39.778933  213406 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 17:45:39.783254  213406 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 17:45:39.783294  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 17:45:41.221460  213406 crio.go:462] duration metric: took 1.44257368s to copy over tarball
	I0414 17:45:41.221534  213406 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 17:45:43.485855  213406 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.264254914s)
	I0414 17:45:43.485888  213406 crio.go:469] duration metric: took 2.264398504s to extract the tarball
	I0414 17:45:43.485899  213406 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 17:45:43.525207  213406 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:45:43.573036  213406 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 17:45:43.573060  213406 cache_images.go:84] Images are preloaded, skipping loading
	I0414 17:45:43.573068  213406 kubeadm.go:934] updating node { 192.168.50.199 8443 v1.32.2 crio true true} ...
	I0414 17:45:43.573156  213406 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-418468 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-418468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 17:45:43.573214  213406 ssh_runner.go:195] Run: crio config
	I0414 17:45:43.633728  213406 cni.go:84] Creating CNI manager for ""
	I0414 17:45:43.633753  213406 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:45:43.633765  213406 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 17:45:43.633791  213406 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.199 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-418468 NodeName:embed-certs-418468 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 17:45:43.633949  213406 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-418468"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.199"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.199"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 17:45:43.634013  213406 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 17:45:43.644883  213406 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 17:45:43.644955  213406 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 17:45:43.658054  213406 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0414 17:45:43.678542  213406 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 17:45:43.698007  213406 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I0414 17:45:41.888968  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:44.387515  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:45.147015  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:47.147667  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:43.363977  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:43.364593  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:43.364642  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:43.364568  213733 retry.go:31] will retry after 1.011314768s: waiting for domain to come up
	I0414 17:45:44.377753  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:44.378268  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:44.378293  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:44.378225  213733 retry.go:31] will retry after 1.856494831s: waiting for domain to come up
	I0414 17:45:46.237268  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:46.237851  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:46.237881  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:46.237785  213733 retry.go:31] will retry after 1.749079149s: waiting for domain to come up
	I0414 17:45:47.990039  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:47.990637  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:47.990670  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:47.990601  213733 retry.go:31] will retry after 2.63350321s: waiting for domain to come up
	I0414 17:45:43.715966  213406 ssh_runner.go:195] Run: grep 192.168.50.199	control-plane.minikube.internal$ /etc/hosts
	I0414 17:45:43.720022  213406 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 17:45:43.733445  213406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:45:43.867405  213406 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:45:43.885300  213406 certs.go:68] Setting up /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468 for IP: 192.168.50.199
	I0414 17:45:43.885324  213406 certs.go:194] generating shared ca certs ...
	I0414 17:45:43.885345  213406 certs.go:226] acquiring lock for ca certs: {Name:mk65518f71a0fe967168d84423f624d889cf0622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:45:43.885512  213406 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key
	I0414 17:45:43.885584  213406 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key
	I0414 17:45:43.885601  213406 certs.go:256] generating profile certs ...
	I0414 17:45:43.885706  213406 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/client.key
	I0414 17:45:43.885782  213406 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/apiserver.key.3a11cdbe
	I0414 17:45:43.885845  213406 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/proxy-client.key
	I0414 17:45:43.885996  213406 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem (1338 bytes)
	W0414 17:45:43.886046  213406 certs.go:480] ignoring /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633_empty.pem, impossibly tiny 0 bytes
	I0414 17:45:43.886061  213406 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 17:45:43.886092  213406 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem (1082 bytes)
	I0414 17:45:43.886126  213406 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem (1123 bytes)
	I0414 17:45:43.886156  213406 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem (1675 bytes)
	I0414 17:45:43.886211  213406 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:45:43.886983  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 17:45:43.924611  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 17:45:43.964084  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 17:45:43.987697  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 17:45:44.015900  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0414 17:45:44.040754  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 17:45:44.075038  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 17:45:44.099117  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 17:45:44.122932  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem --> /usr/share/ca-certificates/156633.pem (1338 bytes)
	I0414 17:45:44.147023  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /usr/share/ca-certificates/1566332.pem (1708 bytes)
	I0414 17:45:44.173790  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 17:45:44.196542  213406 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 17:45:44.214709  213406 ssh_runner.go:195] Run: openssl version
	I0414 17:45:44.220535  213406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156633.pem && ln -fs /usr/share/ca-certificates/156633.pem /etc/ssl/certs/156633.pem"
	I0414 17:45:44.235491  213406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156633.pem
	I0414 17:45:44.240204  213406 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 16:39 /usr/share/ca-certificates/156633.pem
	I0414 17:45:44.240265  213406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156633.pem
	I0414 17:45:44.246067  213406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/156633.pem /etc/ssl/certs/51391683.0"
	I0414 17:45:44.257501  213406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1566332.pem && ln -fs /usr/share/ca-certificates/1566332.pem /etc/ssl/certs/1566332.pem"
	I0414 17:45:44.269005  213406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1566332.pem
	I0414 17:45:44.273740  213406 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 16:39 /usr/share/ca-certificates/1566332.pem
	I0414 17:45:44.273793  213406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1566332.pem
	I0414 17:45:44.279740  213406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1566332.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 17:45:44.291167  213406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 17:45:44.302992  213406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:45:44.307551  213406 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 16:31 /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:45:44.307597  213406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:45:44.313737  213406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 17:45:44.324505  213406 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 17:45:44.328835  213406 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 17:45:44.334805  213406 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 17:45:44.340659  213406 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 17:45:44.346307  213406 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 17:45:44.351874  213406 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 17:45:44.357745  213406 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0414 17:45:44.363409  213406 kubeadm.go:392] StartCluster: {Name:embed-certs-418468 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-418468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.199 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:45:44.363503  213406 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 17:45:44.363553  213406 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:45:44.409542  213406 cri.go:89] found id: ""
	I0414 17:45:44.409612  213406 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 17:45:44.421483  213406 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 17:45:44.421502  213406 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 17:45:44.421553  213406 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 17:45:44.432611  213406 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 17:45:44.433322  213406 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-418468" does not appear in /home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:45:44.433670  213406 kubeconfig.go:62] /home/jenkins/minikube-integration/20349-149500/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-418468" cluster setting kubeconfig missing "embed-certs-418468" context setting]
	I0414 17:45:44.434350  213406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:45:44.435960  213406 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 17:45:44.447295  213406 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.199
	I0414 17:45:44.447335  213406 kubeadm.go:1160] stopping kube-system containers ...
	I0414 17:45:44.447349  213406 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0414 17:45:44.447402  213406 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:45:44.483842  213406 cri.go:89] found id: ""
	I0414 17:45:44.483928  213406 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0414 17:45:44.501457  213406 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:45:44.511344  213406 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:45:44.511360  213406 kubeadm.go:157] found existing configuration files:
	
	I0414 17:45:44.511408  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:45:44.520512  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:45:44.520561  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:45:44.530434  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:45:44.539618  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:45:44.539668  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:45:44.548947  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:45:44.558310  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:45:44.558380  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:45:44.567691  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:45:44.576750  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:45:44.576795  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 17:45:44.586464  213406 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:45:44.598983  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:45:44.718594  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:45:45.695980  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:45:45.996480  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:45:46.072138  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:45:46.200254  213406 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:45:46.200333  213406 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:45:46.701083  213406 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:45:47.201283  213406 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:45:47.253490  213406 api_server.go:72] duration metric: took 1.053227948s to wait for apiserver process to appear ...
	I0414 17:45:47.253532  213406 api_server.go:88] waiting for apiserver healthz status ...
	I0414 17:45:47.253571  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:45:47.254266  213406 api_server.go:269] stopped: https://192.168.50.199:8443/healthz: Get "https://192.168.50.199:8443/healthz": dial tcp 192.168.50.199:8443: connect: connection refused
	I0414 17:45:47.753924  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:45:46.704844  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:48.887470  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:50.393514  213406 api_server.go:279] https://192.168.50.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 17:45:50.393621  213406 api_server.go:103] status: https://192.168.50.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 17:45:50.393644  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:45:50.433133  213406 api_server.go:279] https://192.168.50.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 17:45:50.433159  213406 api_server.go:103] status: https://192.168.50.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 17:45:50.753606  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:45:50.758868  213406 api_server.go:279] https://192.168.50.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 17:45:50.758895  213406 api_server.go:103] status: https://192.168.50.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 17:45:51.254607  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:45:51.259648  213406 api_server.go:279] https://192.168.50.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 17:45:51.259677  213406 api_server.go:103] status: https://192.168.50.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 17:45:51.754419  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:45:51.762365  213406 api_server.go:279] https://192.168.50.199:8443/healthz returned 200:
	ok
	I0414 17:45:51.774330  213406 api_server.go:141] control plane version: v1.32.2
	I0414 17:45:51.774361  213406 api_server.go:131] duration metric: took 4.520816141s to wait for apiserver health ...
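	The healthz wait tolerates the two transient failure modes visible above: 403 while the RBAC bootstrap roles are still being installed (anonymous requests are forbidden), and 500 while poststarthooks such as rbac/bootstrap-roles are still failing. A minimal polling sketch, assuming a self-signed apiserver certificate (hence the skipped TLS verification):
	** example (Go sketch) **
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// Polls the apiserver /healthz endpoint until it returns 200,
	// logging the transient 403/500 bodies seen in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The bootstrapping apiserver serves a self-signed cert,
			// so certificate verification is skipped in this sketch.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.199:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	** /example **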
	I0414 17:45:51.774374  213406 cni.go:84] Creating CNI manager for ""
	I0414 17:45:51.774383  213406 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:45:51.775864  213406 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 17:45:49.648757  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:52.147610  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:50.626885  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:50.627340  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:50.627368  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:50.627294  213733 retry.go:31] will retry after 2.57658473s: waiting for domain to come up
	I0414 17:45:53.207057  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:53.207562  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:53.207590  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:53.207520  213733 retry.go:31] will retry after 3.448748827s: waiting for domain to come up
	I0414 17:45:51.777039  213406 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 17:45:51.806959  213406 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
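	The 496-byte `/etc/cni/net.d/1-k8s.conflist` installed above is a bridge CNI plugin chain. An illustrative sketch of writing such a file; the plugin fields and the 10.244.0.0/16 subnet are assumptions for demonstration, not minikube's exact bytes:
	** example (Go sketch) **
	package main

	import (
		"fmt"
		"os"
	)

	// Illustrative bridge CNI config in the spirit of the 1-k8s.conflist
	// the log copies into /etc/cni/net.d. Values here are assumed.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			fmt.Println("write failed:", err)
		}
	}
	** /example **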
	I0414 17:45:51.836511  213406 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 17:45:51.848209  213406 system_pods.go:59] 8 kube-system pods found
	I0414 17:45:51.848270  213406 system_pods.go:61] "coredns-668d6bf9bc-z4n2r" [ee9fd5dc-3f74-4c37-8e96-c5ef30b99046] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 17:45:51.848284  213406 system_pods.go:61] "etcd-embed-certs-418468" [4622769e-1912-4b04-84c3-5dea86d25184] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0414 17:45:51.848301  213406 system_pods.go:61] "kube-apiserver-embed-certs-418468" [266cb804-e782-479b-8dac-132b529e46f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0414 17:45:51.848319  213406 system_pods.go:61] "kube-controller-manager-embed-certs-418468" [ba3c123b-8919-45cc-96aa-cdd449e77762] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0414 17:45:51.848328  213406 system_pods.go:61] "kube-proxy-6dft2" [f97366b9-4a39-4659-8e3b-c551085e33d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0414 17:45:51.848340  213406 system_pods.go:61] "kube-scheduler-embed-certs-418468" [12a8ba4d-1e6d-445c-b170-d36f15352271] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0414 17:45:51.848350  213406 system_pods.go:61] "metrics-server-f79f97bbb-9vnsg" [95cc235a-e21c-4a97-9334-d5030b9097d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:45:51.848359  213406 system_pods.go:61] "storage-provisioner" [c969e5f7-a7dc-441f-b8eb-2c3af1803f32] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0414 17:45:51.848371  213406 system_pods.go:74] duration metric: took 11.836623ms to wait for pod list to return data ...
	I0414 17:45:51.848386  213406 node_conditions.go:102] verifying NodePressure condition ...
	I0414 17:45:51.868743  213406 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 17:45:51.868781  213406 node_conditions.go:123] node cpu capacity is 2
	I0414 17:45:51.868805  213406 node_conditions.go:105] duration metric: took 20.412892ms to run NodePressure ...
	I0414 17:45:51.868835  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:45:52.239201  213406 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0414 17:45:52.242855  213406 kubeadm.go:739] kubelet initialised
	I0414 17:45:52.242878  213406 kubeadm.go:740] duration metric: took 3.647876ms waiting for restarted kubelet to initialise ...
	I0414 17:45:52.242889  213406 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:45:52.245160  213406 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-z4n2r" in "kube-system" namespace to be "Ready" ...
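	The pod_ready wait above repeatedly fetches each system-critical pod and checks its Ready condition. A hedged client-go sketch of one such wait; the kubeconfig path is a hypothetical placeholder:
	** example (Go sketch) **
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// Waits for a named pod to report the Ready condition, mirroring
	// the pod_ready.go loop in the log above.
	func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		// Hypothetical kubeconfig path for illustration.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForPodReady(cs, "kube-system", "coredns-668d6bf9bc-z4n2r", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	** /example **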
	I0414 17:45:51.386891  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:53.895571  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:54.645821  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:56.646257  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:56.658750  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.659197  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has current primary IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.659235  213635 main.go:141] libmachine: (old-k8s-version-768580) found domain IP: 192.168.72.58
	I0414 17:45:56.659245  213635 main.go:141] libmachine: (old-k8s-version-768580) reserving static IP address...
	I0414 17:45:56.659616  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "old-k8s-version-768580", mac: "52:54:00:d8:47:6d", ip: "192.168.72.58"} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:56.659642  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | skip adding static IP to network mk-old-k8s-version-768580 - found existing host DHCP lease matching {name: "old-k8s-version-768580", mac: "52:54:00:d8:47:6d", ip: "192.168.72.58"}
	I0414 17:45:56.659654  213635 main.go:141] libmachine: (old-k8s-version-768580) reserved static IP address 192.168.72.58 for domain old-k8s-version-768580
	I0414 17:45:56.659671  213635 main.go:141] libmachine: (old-k8s-version-768580) waiting for SSH...
	I0414 17:45:56.659708  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | Getting to WaitForSSH function...
	I0414 17:45:56.661714  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.662056  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:56.662087  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.662202  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | Using SSH client type: external
	I0414 17:45:56.662226  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | Using SSH private key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa (-rw-------)
	I0414 17:45:56.662273  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 17:45:56.662292  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | About to run SSH command:
	I0414 17:45:56.662309  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | exit 0
	I0414 17:45:56.781680  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | SSH cmd err, output: <nil>: 
	I0414 17:45:56.782109  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetConfigRaw
	I0414 17:45:56.782751  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:45:56.785158  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.785469  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:56.785502  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.785736  213635 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/config.json ...
	I0414 17:45:56.785961  213635 machine.go:93] provisionDockerMachine start ...
	I0414 17:45:56.785980  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:56.786175  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:56.788189  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.788560  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:56.788585  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.788720  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:56.788874  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:56.789008  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:56.789162  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:56.789316  213635 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:56.789519  213635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:45:56.789529  213635 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 17:45:56.890137  213635 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 17:45:56.890168  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetMachineName
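	provisionDockerMachine runs its commands (`hostname`, the tee and sed snippets below) through the native Go SSH client shown above. A minimal sketch of a single remote `hostname` run with golang.org/x/crypto/ssh, using the address and key path from the log; the insecure host-key callback mirrors the StrictHostKeyChecking=no option logged earlier:
	** example (Go sketch) **
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and address taken from the log above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User: "docker",
			Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
			// Matches the StrictHostKeyChecking=no behaviour in the log.
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		client, err := ssh.Dial("tcp", "192.168.72.58:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		fmt.Printf("SSH cmd err, output: %v: %s", err, out)
	}
	** /example **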
	I0414 17:45:56.890394  213635 buildroot.go:166] provisioning hostname "old-k8s-version-768580"
	I0414 17:45:56.890418  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetMachineName
	I0414 17:45:56.890619  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:56.892966  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.893390  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:56.893410  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.893563  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:56.893750  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:56.893919  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:56.894061  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:56.894207  213635 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:56.894529  213635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:45:56.894549  213635 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-768580 && echo "old-k8s-version-768580" | sudo tee /etc/hostname
	I0414 17:45:57.008447  213635 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-768580
	
	I0414 17:45:57.008471  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:57.011111  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.011428  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:57.011469  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.011584  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:57.011804  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:57.011985  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:57.012096  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:57.012205  213635 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:57.012392  213635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:45:57.012407  213635 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-768580' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-768580/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-768580' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 17:45:57.132689  213635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 17:45:57.132739  213635 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20349-149500/.minikube CaCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20349-149500/.minikube}
	I0414 17:45:57.132763  213635 buildroot.go:174] setting up certificates
	I0414 17:45:57.132773  213635 provision.go:84] configureAuth start
	I0414 17:45:57.132784  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetMachineName
	I0414 17:45:57.133116  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:45:57.136014  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.136345  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:57.136374  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.136550  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:57.139546  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.140028  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:57.140059  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.140266  213635 provision.go:143] copyHostCerts
	I0414 17:45:57.140335  213635 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem, removing ...
	I0414 17:45:57.140361  213635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem
	I0414 17:45:57.140462  213635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem (1082 bytes)
	I0414 17:45:57.140589  213635 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem, removing ...
	I0414 17:45:57.140603  213635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem
	I0414 17:45:57.140655  213635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem (1123 bytes)
	I0414 17:45:57.140743  213635 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem, removing ...
	I0414 17:45:57.140761  213635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem
	I0414 17:45:57.140798  213635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem (1675 bytes)
	I0414 17:45:57.140884  213635 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-768580 san=[127.0.0.1 192.168.72.58 localhost minikube old-k8s-version-768580]
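	The server cert generated above carries the SAN list [127.0.0.1 192.168.72.58 localhost minikube old-k8s-version-768580]. A sketch of producing a certificate with those SANs via crypto/x509; it self-signs for brevity, whereas minikube signs with its CA key pair:
	** example (Go sketch) **
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-768580"}},
			NotBefore:    time.Now(),
			// 26280h matches the CertExpiration in the cluster config below.
			NotAfter:    time.Now().Add(26280 * time.Hour),
			KeyUsage:    x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the provision.go line above.
			DNSNames:    []string{"localhost", "minikube", "old-k8s-version-768580"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.58")},
		}
		// Self-signed (template doubles as parent); minikube uses its CA here.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
	** /example **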
	I0414 17:45:57.638227  213635 provision.go:177] copyRemoteCerts
	I0414 17:45:57.638317  213635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 17:45:57.638348  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:57.641173  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.641530  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:57.641563  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.641714  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:57.641916  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:57.642092  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:57.642232  213635 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:45:57.724240  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 17:45:57.749634  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0414 17:45:57.776416  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 17:45:57.801692  213635 provision.go:87] duration metric: took 668.902854ms to configureAuth
	I0414 17:45:57.801722  213635 buildroot.go:189] setting minikube options for container-runtime
	I0414 17:45:57.801958  213635 config.go:182] Loaded profile config "old-k8s-version-768580": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 17:45:57.802054  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:57.804673  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.805023  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:57.805051  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.805250  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:57.805434  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:57.805597  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:57.805715  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:57.805892  213635 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:57.806134  213635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:45:57.806153  213635 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 17:45:58.022403  213635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 17:45:58.022437  213635 machine.go:96] duration metric: took 1.236460782s to provisionDockerMachine
	I0414 17:45:58.022452  213635 start.go:293] postStartSetup for "old-k8s-version-768580" (driver="kvm2")
	I0414 17:45:58.022466  213635 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 17:45:58.022505  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:58.022841  213635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 17:45:58.022875  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:58.025802  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.026223  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:58.026254  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.026507  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:58.026657  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:58.026765  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:58.026909  213635 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:45:58.112706  213635 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 17:45:58.117225  213635 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 17:45:58.117253  213635 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/addons for local assets ...
	I0414 17:45:58.117324  213635 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/files for local assets ...
	I0414 17:45:58.117416  213635 filesync.go:149] local asset: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem -> 1566332.pem in /etc/ssl/certs
	I0414 17:45:58.117503  213635 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 17:45:58.128036  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:45:58.152497  213635 start.go:296] duration metric: took 130.019138ms for postStartSetup
	I0414 17:45:58.152538  213635 fix.go:56] duration metric: took 19.889962017s for fixHost
	I0414 17:45:58.152587  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:58.155565  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.156016  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:58.156050  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.156233  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:58.156440  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:58.156667  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:58.156863  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:58.157079  213635 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:58.157365  213635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:45:58.157380  213635 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 17:45:58.262578  213635 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744652758.231554158
	
	I0414 17:45:58.262603  213635 fix.go:216] guest clock: 1744652758.231554158
	I0414 17:45:58.262612  213635 fix.go:229] Guest: 2025-04-14 17:45:58.231554158 +0000 UTC Remote: 2025-04-14 17:45:58.152542501 +0000 UTC m=+34.908827189 (delta=79.011657ms)
	I0414 17:45:58.262635  213635 fix.go:200] guest clock delta is within tolerance: 79.011657ms
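	The guest clock check parses `date +%s.%N` output and compares it against the host clock, accepting small deltas (79ms here). A sketch of that computation; the one-second tolerance is an assumed threshold, not necessarily the value fix.go uses:
	** example (Go sketch) **
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// Parses `date +%s.%N` output and returns host-minus-guest skew.
	func clockDelta(guestOut string) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		if len(parts) != 2 {
			return 0, fmt.Errorf("unexpected date output %q", guestOut)
		}
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		nsec, err := strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return 0, err
		}
		return time.Since(time.Unix(sec, nsec)), nil
	}

	func main() {
		d, err := clockDelta("1744652758.231554158\n")
		if err != nil {
			panic(err)
		}
		const tolerance = time.Second // assumed threshold for illustration
		fmt.Printf("delta=%v within tolerance=%v: %v\n", d, tolerance, d < tolerance && d > -tolerance)
	}
	** /example **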
	I0414 17:45:58.262641  213635 start.go:83] releasing machines lock for "old-k8s-version-768580", held for 20.000092548s
	I0414 17:45:58.262660  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:58.262963  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:45:58.265585  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.265964  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:58.266004  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.266157  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:58.266649  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:58.266849  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:58.266978  213635 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 17:45:58.267030  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:58.267047  213635 ssh_runner.go:195] Run: cat /version.json
	I0414 17:45:58.267073  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:58.269647  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.269715  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.270071  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:58.270098  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.270124  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:58.270157  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.270238  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:58.270344  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:58.270424  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:58.270497  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:58.270566  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:58.270678  213635 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:45:58.270730  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:58.270836  213635 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:45:54.250565  213406 pod_ready.go:103] pod "coredns-668d6bf9bc-z4n2r" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:56.250955  213406 pod_ready.go:103] pod "coredns-668d6bf9bc-z4n2r" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:58.251402  213406 pod_ready.go:103] pod "coredns-668d6bf9bc-z4n2r" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:58.343285  213635 ssh_runner.go:195] Run: systemctl --version
	I0414 17:45:58.367988  213635 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 17:45:58.519539  213635 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 17:45:58.526018  213635 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 17:45:58.526083  213635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 17:45:58.542624  213635 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
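	Disabling the conflicting bridge/podman CNI configs is done by renaming them aside with a `.mk_disabled` suffix, as in the find/mv command above. An equivalent sketch in Go:
	** example (Go sketch) **
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// Renames bridge/podman CNI configs in dir so they stop taking
	// effect, mirroring `find ... -exec mv {} {}.mk_disabled`.
	func disableBridgeConfigs(dir string) ([]string, error) {
		var disabled []string
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		d, err := disableBridgeConfigs("/etc/cni/net.d")
		fmt.Println("disabled:", d, "err:", err)
	}
	** /example **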
	I0414 17:45:58.542648  213635 start.go:495] detecting cgroup driver to use...
	I0414 17:45:58.542718  213635 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 17:45:58.558731  213635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 17:45:58.572169  213635 docker.go:217] disabling cri-docker service (if available) ...
	I0414 17:45:58.572211  213635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 17:45:58.585163  213635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 17:45:58.598940  213635 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 17:45:58.721667  213635 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 17:45:58.879281  213635 docker.go:233] disabling docker service ...
	I0414 17:45:58.879343  213635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 17:45:58.896126  213635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 17:45:58.908836  213635 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 17:45:59.033428  213635 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 17:45:59.166628  213635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 17:45:59.181684  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 17:45:59.200617  213635 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0414 17:45:59.200680  213635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:59.211541  213635 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 17:45:59.211600  213635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:59.223657  213635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:59.235487  213635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
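	The three sed edits above pin the pause image, drop any stale conmon_cgroup line, and force the cgroupfs cgroup manager with conmon in the "pod" cgroup. A sketch applying the same rewrites with Go regexps:
	** example (Go sketch) **
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println(err)
			return
		}
		// Pin the pause image, as the first sed does.
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
		// Delete any existing conmon_cgroup line, as the third sed does.
		data = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n`).
			ReplaceAll(data, nil)
		// Force cgroupfs and re-add conmon_cgroup after it.
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			fmt.Println(err)
		}
	}
	** /example **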
	I0414 17:45:59.248000  213635 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 17:45:59.261365  213635 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 17:45:59.273037  213635 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 17:45:59.273132  213635 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 17:45:59.288901  213635 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 17:45:59.300042  213635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:45:59.423635  213635 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 17:45:59.529685  213635 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 17:45:59.529758  213635 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 17:45:59.534592  213635 start.go:563] Will wait 60s for crictl version
	I0414 17:45:59.534640  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:45:59.538651  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 17:45:59.578522  213635 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 17:45:59.578595  213635 ssh_runner.go:195] Run: crio --version
	I0414 17:45:59.605740  213635 ssh_runner.go:195] Run: crio --version
	I0414 17:45:59.635045  213635 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0414 17:45:56.385712  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:58.386662  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:00.388088  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:58.647473  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:01.146666  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:59.636069  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:45:59.638462  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:59.638803  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:59.638829  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:59.639064  213635 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0414 17:45:59.643370  213635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 17:45:59.657222  213635 kubeadm.go:883] updating cluster {Name:old-k8s-version-768580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 17:45:59.657362  213635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 17:45:59.657409  213635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:45:59.704172  213635 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 17:45:59.704247  213635 ssh_runner.go:195] Run: which lz4
	I0414 17:45:59.708554  213635 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 17:45:59.712850  213635 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 17:45:59.712882  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0414 17:46:01.354039  213635 crio.go:462] duration metric: took 1.645520081s to copy over tarball
	I0414 17:46:01.354112  213635 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
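	The preload is an lz4-compressed tarball streamed through `tar -I lz4`. A sketch that walks the same archive with github.com/pierrec/lz4/v4 (an assumed third-party reader), listing entries rather than extracting, since the real command also restores xattrs:
	** example (Go sketch) **
	package main

	import (
		"archive/tar"
		"fmt"
		"io"
		"os"

		"github.com/pierrec/lz4/v4"
	)

	func main() {
		f, err := os.Open("/preloaded.tar.lz4")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		// Chain the lz4 decompressor into the tar reader, the streaming
		// equivalent of `tar -I lz4 -xf`.
		tr := tar.NewReader(lz4.NewReader(f))
		for {
			hdr, err := tr.Next()
			if err == io.EOF {
				break
			}
			if err != nil {
				panic(err)
			}
			fmt.Println(hdr.Name, hdr.Size)
		}
	}
	** /example **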
	I0414 17:45:59.252026  213406 pod_ready.go:93] pod "coredns-668d6bf9bc-z4n2r" in "kube-system" namespace has status "Ready":"True"
	I0414 17:45:59.252050  213406 pod_ready.go:82] duration metric: took 7.006866592s for pod "coredns-668d6bf9bc-z4n2r" in "kube-system" namespace to be "Ready" ...
	I0414 17:45:59.252074  213406 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:45:59.255615  213406 pod_ready.go:93] pod "etcd-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:45:59.255638  213406 pod_ready.go:82] duration metric: took 3.555461ms for pod "etcd-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:45:59.255649  213406 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:01.263173  213406 pod_ready.go:103] pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:02.887635  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:05.387807  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:03.646378  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:05.647729  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:08.146880  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:04.261653  213635 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.907516994s)
	I0414 17:46:04.261683  213635 crio.go:469] duration metric: took 2.907610683s to extract the tarball
	I0414 17:46:04.261695  213635 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 17:46:04.307964  213635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:46:04.345077  213635 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 17:46:04.345112  213635 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 17:46:04.345199  213635 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:46:04.345203  213635 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.345239  213635 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.345249  213635 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0414 17:46:04.345318  213635 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.345321  213635 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.345209  213635 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.345436  213635 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.347103  213635 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.347115  213635 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.347128  213635 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.347132  213635 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.347093  213635 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.347109  213635 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.347093  213635 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0414 17:46:04.347164  213635 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:46:04.489472  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.490905  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.494468  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.498887  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.499207  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.503007  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.528129  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0414 17:46:04.591926  213635 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0414 17:46:04.591983  213635 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.592033  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.628524  213635 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0414 17:46:04.628568  213635 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.628604  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.691347  213635 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0414 17:46:04.691455  213635 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.691347  213635 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0414 17:46:04.691571  213635 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.691392  213635 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0414 17:46:04.691634  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.691661  213635 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.691393  213635 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0414 17:46:04.691706  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.691731  213635 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.691759  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.691509  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.696665  213635 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0414 17:46:04.696697  213635 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0414 17:46:04.696714  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.696727  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.696730  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.707222  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.707277  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.709851  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.710042  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.834502  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 17:46:04.834653  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.834668  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.856960  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.857034  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.857094  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.857179  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.983051  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.983060  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.983060  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 17:46:05.024632  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:05.024779  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:05.031272  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:05.031399  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:05.161869  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0414 17:46:05.170557  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0414 17:46:05.170702  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 17:46:05.195041  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0414 17:46:05.195041  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0414 17:46:05.208270  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0414 17:46:05.208341  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0414 17:46:05.220290  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0414 17:46:05.331240  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:46:05.471903  213635 cache_images.go:92] duration metric: took 1.126766183s to LoadCachedImages
	W0414 17:46:05.471974  213635 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0414 17:46:05.471985  213635 kubeadm.go:934] updating node { 192.168.72.58 8443 v1.20.0 crio true true} ...
	I0414 17:46:05.472082  213635 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-768580 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 17:46:05.472172  213635 ssh_runner.go:195] Run: crio config
	I0414 17:46:05.531642  213635 cni.go:84] Creating CNI manager for ""
	I0414 17:46:05.531667  213635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:46:05.531678  213635 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 17:46:05.531697  213635 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.58 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-768580 NodeName:old-k8s-version-768580 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0414 17:46:05.531815  213635 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-768580"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 17:46:05.531897  213635 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0414 17:46:05.542769  213635 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 17:46:05.542861  213635 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 17:46:05.552930  213635 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0414 17:46:05.570087  213635 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 17:46:05.588483  213635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0414 17:46:05.606443  213635 ssh_runner.go:195] Run: grep 192.168.72.58	control-plane.minikube.internal$ /etc/hosts
	I0414 17:46:05.610756  213635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 17:46:05.622873  213635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:46:05.770402  213635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:46:05.789353  213635 certs.go:68] Setting up /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580 for IP: 192.168.72.58
	I0414 17:46:05.789374  213635 certs.go:194] generating shared ca certs ...
	I0414 17:46:05.789395  213635 certs.go:226] acquiring lock for ca certs: {Name:mk65518f71a0fe967168d84423f624d889cf0622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:46:05.789542  213635 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key
	I0414 17:46:05.789598  213635 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key
	I0414 17:46:05.789613  213635 certs.go:256] generating profile certs ...
	I0414 17:46:05.789717  213635 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/client.key
	I0414 17:46:05.789816  213635 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.key.0f5f550a
	I0414 17:46:05.789911  213635 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/proxy-client.key
	I0414 17:46:05.790030  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem (1338 bytes)
	W0414 17:46:05.790067  213635 certs.go:480] ignoring /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633_empty.pem, impossibly tiny 0 bytes
	I0414 17:46:05.790077  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 17:46:05.790130  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem (1082 bytes)
	I0414 17:46:05.790163  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem (1123 bytes)
	I0414 17:46:05.790195  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem (1675 bytes)
	I0414 17:46:05.790251  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:46:05.790829  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 17:46:05.852348  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 17:46:05.879909  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 17:46:05.924274  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 17:46:05.968318  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 17:46:06.004046  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 17:46:06.039672  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 17:46:06.068041  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 17:46:06.093159  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem --> /usr/share/ca-certificates/156633.pem (1338 bytes)
	I0414 17:46:06.118949  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /usr/share/ca-certificates/1566332.pem (1708 bytes)
	I0414 17:46:06.144480  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 17:46:06.171159  213635 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 17:46:06.189499  213635 ssh_runner.go:195] Run: openssl version
	I0414 17:46:06.196060  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156633.pem && ln -fs /usr/share/ca-certificates/156633.pem /etc/ssl/certs/156633.pem"
	I0414 17:46:06.206864  213635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156633.pem
	I0414 17:46:06.211352  213635 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 16:39 /usr/share/ca-certificates/156633.pem
	I0414 17:46:06.211407  213635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156633.pem
	I0414 17:46:06.217759  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/156633.pem /etc/ssl/certs/51391683.0"
	I0414 17:46:06.228546  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1566332.pem && ln -fs /usr/share/ca-certificates/1566332.pem /etc/ssl/certs/1566332.pem"
	I0414 17:46:06.239146  213635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1566332.pem
	I0414 17:46:06.243457  213635 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 16:39 /usr/share/ca-certificates/1566332.pem
	I0414 17:46:06.243511  213635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1566332.pem
	I0414 17:46:06.249141  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1566332.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 17:46:06.259582  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 17:46:06.269988  213635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:46:06.275271  213635 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 16:31 /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:46:06.275324  213635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:46:06.282428  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 17:46:06.293404  213635 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 17:46:06.298115  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 17:46:06.304513  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 17:46:06.310675  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 17:46:06.317218  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 17:46:06.324114  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 17:46:06.331759  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0414 17:46:06.337898  213635 kubeadm.go:392] StartCluster: {Name:old-k8s-version-768580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:46:06.337991  213635 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 17:46:06.338037  213635 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:46:06.381282  213635 cri.go:89] found id: ""
	I0414 17:46:06.381351  213635 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 17:46:06.392326  213635 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 17:46:06.392345  213635 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 17:46:06.392385  213635 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 17:46:06.402275  213635 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 17:46:06.403224  213635 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-768580" does not appear in /home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:46:06.403594  213635 kubeconfig.go:62] /home/jenkins/minikube-integration/20349-149500/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-768580" cluster setting kubeconfig missing "old-k8s-version-768580" context setting]
	I0414 17:46:06.404086  213635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:46:06.460048  213635 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 17:46:06.470500  213635 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.58
	I0414 17:46:06.470535  213635 kubeadm.go:1160] stopping kube-system containers ...
	I0414 17:46:06.470546  213635 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0414 17:46:06.470624  213635 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:46:06.509152  213635 cri.go:89] found id: ""
	I0414 17:46:06.509210  213635 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0414 17:46:06.526163  213635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:46:06.535901  213635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:46:06.535928  213635 kubeadm.go:157] found existing configuration files:
	
	I0414 17:46:06.535978  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:46:06.545480  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:46:06.545535  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:46:06.554610  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:46:06.563294  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:46:06.563347  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:46:06.572284  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:46:06.581431  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:46:06.581475  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:46:06.591211  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:46:06.600340  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:46:06.600408  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 17:46:06.609494  213635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:46:06.618800  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:46:06.747191  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:46:07.478890  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:46:07.697670  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:46:07.793179  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:46:07.893891  213635 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:46:07.893971  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:03.762310  213406 pod_ready.go:103] pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:04.762763  213406 pod_ready.go:93] pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:46:04.762794  213406 pod_ready.go:82] duration metric: took 5.507135949s for pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.762808  213406 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.767311  213406 pod_ready.go:93] pod "kube-controller-manager-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:46:04.767329  213406 pod_ready.go:82] duration metric: took 4.514084ms for pod "kube-controller-manager-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.767337  213406 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6dft2" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.771924  213406 pod_ready.go:93] pod "kube-proxy-6dft2" in "kube-system" namespace has status "Ready":"True"
	I0414 17:46:04.771944  213406 pod_ready.go:82] duration metric: took 4.599852ms for pod "kube-proxy-6dft2" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.771954  213406 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.776235  213406 pod_ready.go:93] pod "kube-scheduler-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:46:04.776251  213406 pod_ready.go:82] duration metric: took 4.290311ms for pod "kube-scheduler-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.776264  213406 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:06.782241  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:07.388743  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:09.886293  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:10.645757  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:12.646190  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:08.394410  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:08.895002  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:09.395022  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:09.895018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:10.394996  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:10.894824  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:11.394638  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:11.894428  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:12.394452  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:12.894017  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:09.281824  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:11.282179  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:11.886469  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:13.886515  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:15.146498  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:17.147156  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:13.394405  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:13.894519  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:14.394847  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:14.894997  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:15.394630  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:15.895007  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:16.394831  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:16.894632  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:17.395016  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:17.894993  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:13.783938  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:16.282525  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:16.387995  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:18.887504  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:19.645731  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:21.645945  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:18.394976  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:18.895068  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:19.394434  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:19.894886  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:20.395037  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:20.895061  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:21.394429  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:21.894500  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:22.394822  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:22.895080  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:18.782119  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:20.785464  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:23.281701  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:21.387824  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:23.886390  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:24.145922  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:26.645858  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:23.394953  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:23.894339  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:24.395018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:24.895037  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:25.394854  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:25.894984  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:26.395005  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:26.895007  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:27.395035  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:27.895034  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:25.282520  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:27.780903  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:26.386775  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:28.886919  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:28.646216  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:30.646635  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:33.146515  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:28.394580  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:28.895018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:29.394479  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:29.894485  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:30.394483  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:30.894471  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:31.395020  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:31.895014  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:32.395034  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:32.895028  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:29.782338  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:32.280971  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:31.389561  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:33.885891  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:35.646041  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:38.146195  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:33.394018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:33.894501  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:34.394226  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:34.894064  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:35.394952  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:35.895016  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:36.394607  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:36.895006  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:37.394673  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:37.894995  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:34.282968  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:36.781804  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:35.886870  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:38.385985  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:40.386210  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:40.646578  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:43.146373  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:38.394272  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:38.894875  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:39.394148  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:39.895036  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:40.394685  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:40.895010  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:41.394981  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:41.894634  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:42.394270  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:42.895029  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:38.783097  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:41.281604  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:43.281689  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:42.387307  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:44.885815  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:45.646331  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:48.146832  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:43.394362  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:43.894756  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:44.395057  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:44.895022  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:45.394470  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:45.894701  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:46.395033  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:46.895033  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:47.394321  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:47.895018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:45.781213  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:47.782055  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:46.886132  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:48.887731  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:50.646089  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:52.646393  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:48.394554  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:48.894703  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:49.394432  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:49.894498  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:50.395063  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:50.894449  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:51.395000  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:51.895026  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:52.394891  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:52.894471  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:49.782883  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:52.282500  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:51.386370  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:53.387056  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:55.387096  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:55.145864  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:57.145973  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:53.394778  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:53.894664  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:54.394089  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:54.894622  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:55.394495  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:55.894999  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:56.395001  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:56.894095  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:57.394283  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:57.894977  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:54.282957  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:56.781374  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:57.887077  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:00.386841  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:59.146801  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:01.645801  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:58.394681  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:58.895019  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:59.394738  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:59.894984  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:00.394802  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:00.894854  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:01.395049  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:01.895019  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:02.394977  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:02.894501  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:58.782051  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:00.782255  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:02.782525  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:02.886126  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:04.886471  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:03.646142  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:06.146967  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:03.394365  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:03.895039  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:04.395027  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:04.894987  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:05.394716  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:05.894080  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:06.394955  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:06.894670  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:07.394902  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:07.894929  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:07.895008  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:07.936773  213635 cri.go:89] found id: ""
	I0414 17:47:07.936809  213635 logs.go:282] 0 containers: []
	W0414 17:47:07.936822  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:07.936830  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:07.936908  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:07.971073  213635 cri.go:89] found id: ""
	I0414 17:47:07.971104  213635 logs.go:282] 0 containers: []
	W0414 17:47:07.971113  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:07.971118  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:07.971171  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:08.010389  213635 cri.go:89] found id: ""
	I0414 17:47:08.010414  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.010422  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:08.010427  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:08.010482  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:08.044286  213635 cri.go:89] found id: ""
	I0414 17:47:08.044322  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.044334  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:08.044344  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:08.044413  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:08.079985  213635 cri.go:89] found id: ""
	I0414 17:47:08.080008  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.080016  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:08.080021  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:08.080071  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:08.119431  213635 cri.go:89] found id: ""
	I0414 17:47:08.119456  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.119468  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:08.119474  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:08.119529  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:08.152203  213635 cri.go:89] found id: ""
	I0414 17:47:08.152227  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.152234  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:08.152239  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:08.152287  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:08.187035  213635 cri.go:89] found id: ""
	I0414 17:47:08.187064  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.187075  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:08.187092  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:08.187106  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0414 17:47:05.283544  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:07.781984  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:06.887145  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:09.386391  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:08.645957  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:10.646258  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:13.147462  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	W0414 17:47:08.312274  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:08.312301  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:08.312315  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:08.382714  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:08.382745  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:08.421561  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:08.421588  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:08.476855  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:08.476891  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:10.991104  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:11.004501  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:11.004575  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:11.039060  213635 cri.go:89] found id: ""
	I0414 17:47:11.039086  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.039094  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:11.039099  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:11.039145  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:11.073857  213635 cri.go:89] found id: ""
	I0414 17:47:11.073883  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.073890  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:11.073896  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:11.073942  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:11.106411  213635 cri.go:89] found id: ""
	I0414 17:47:11.106436  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.106493  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:11.106505  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:11.106550  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:11.145377  213635 cri.go:89] found id: ""
	I0414 17:47:11.145406  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.145416  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:11.145423  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:11.145481  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:11.178621  213635 cri.go:89] found id: ""
	I0414 17:47:11.178650  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.178661  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:11.178668  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:11.178731  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:11.212798  213635 cri.go:89] found id: ""
	I0414 17:47:11.212832  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.212840  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:11.212846  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:11.212902  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:11.258553  213635 cri.go:89] found id: ""
	I0414 17:47:11.258576  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.258584  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:11.258589  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:11.258637  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:11.318616  213635 cri.go:89] found id: ""
	I0414 17:47:11.318658  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.318669  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:11.318680  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:11.318695  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:11.381468  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:11.381500  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:11.395975  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:11.395999  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:11.468932  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:11.468954  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:11.468971  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:11.547431  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:11.547464  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:10.281538  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:12.284013  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:11.386803  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:13.387771  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:15.645939  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:17.647578  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:14.089096  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:14.105644  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:14.105710  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:14.139763  213635 cri.go:89] found id: ""
	I0414 17:47:14.139791  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.139798  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:14.139804  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:14.139866  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:14.174571  213635 cri.go:89] found id: ""
	I0414 17:47:14.174594  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.174600  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:14.174605  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:14.174659  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:14.208140  213635 cri.go:89] found id: ""
	I0414 17:47:14.208164  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.208171  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:14.208177  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:14.208233  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:14.240906  213635 cri.go:89] found id: ""
	I0414 17:47:14.240940  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.240952  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:14.240959  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:14.241023  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:14.273549  213635 cri.go:89] found id: ""
	I0414 17:47:14.273581  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.273593  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:14.273599  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:14.273652  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:14.308758  213635 cri.go:89] found id: ""
	I0414 17:47:14.308791  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.308798  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:14.308805  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:14.308868  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:14.343464  213635 cri.go:89] found id: ""
	I0414 17:47:14.343492  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.343503  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:14.343510  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:14.343571  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:14.377456  213635 cri.go:89] found id: ""
	I0414 17:47:14.377483  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.377493  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:14.377503  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:14.377517  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:14.428031  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:14.428059  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:14.441682  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:14.441706  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:14.511433  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:14.511456  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:14.511470  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:14.591334  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:14.591373  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:17.131067  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:17.150199  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:17.150257  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:17.195868  213635 cri.go:89] found id: ""
	I0414 17:47:17.195895  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.195902  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:17.195909  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:17.195968  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:17.248530  213635 cri.go:89] found id: ""
	I0414 17:47:17.248562  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.248573  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:17.248600  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:17.248664  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:17.302561  213635 cri.go:89] found id: ""
	I0414 17:47:17.302592  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.302603  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:17.302611  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:17.302676  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:17.337154  213635 cri.go:89] found id: ""
	I0414 17:47:17.337185  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.337196  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:17.337204  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:17.337262  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:17.372117  213635 cri.go:89] found id: ""
	I0414 17:47:17.372142  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.372149  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:17.372154  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:17.372209  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:17.409162  213635 cri.go:89] found id: ""
	I0414 17:47:17.409190  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.409199  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:17.409204  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:17.409253  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:17.444609  213635 cri.go:89] found id: ""
	I0414 17:47:17.444636  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.444652  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:17.444660  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:17.444721  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:17.484188  213635 cri.go:89] found id: ""
	I0414 17:47:17.484216  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.484226  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:17.484238  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:17.484252  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:17.523203  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:17.523249  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:17.573785  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:17.573818  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:17.586989  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:17.587014  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:17.659369  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:17.659392  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:17.659408  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:14.781454  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:16.782152  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:15.888032  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:18.387319  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:20.147048  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:22.646239  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:20.241973  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:20.255211  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:20.255288  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:20.292821  213635 cri.go:89] found id: ""
	I0414 17:47:20.292854  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.292866  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:20.292873  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:20.292933  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:20.331101  213635 cri.go:89] found id: ""
	I0414 17:47:20.331150  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.331162  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:20.331169  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:20.331247  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:20.369990  213635 cri.go:89] found id: ""
	I0414 17:47:20.370015  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.370022  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:20.370027  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:20.370096  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:20.406805  213635 cri.go:89] found id: ""
	I0414 17:47:20.406836  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.406846  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:20.406852  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:20.406913  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:20.442314  213635 cri.go:89] found id: ""
	I0414 17:47:20.442340  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.442348  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:20.442353  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:20.442413  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:20.476588  213635 cri.go:89] found id: ""
	I0414 17:47:20.476617  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.476627  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:20.476634  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:20.476695  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:20.510731  213635 cri.go:89] found id: ""
	I0414 17:47:20.510782  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.510821  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:20.510833  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:20.510906  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:20.545219  213635 cri.go:89] found id: ""
	I0414 17:47:20.545244  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.545255  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:20.545277  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:20.545292  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:20.583147  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:20.583180  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:20.636347  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:20.636382  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:20.650452  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:20.650477  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:20.722784  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:20.722811  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:20.722828  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:19.282759  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:21.782197  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:20.886279  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:22.886745  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:24.886852  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:25.145867  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:27.146656  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:23.298966  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:23.312159  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:23.312251  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:23.353883  213635 cri.go:89] found id: ""
	I0414 17:47:23.353907  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.353915  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:23.353921  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:23.354005  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:23.391644  213635 cri.go:89] found id: ""
	I0414 17:47:23.391671  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.391680  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:23.391688  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:23.391732  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:23.427612  213635 cri.go:89] found id: ""
	I0414 17:47:23.427644  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.427652  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:23.427658  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:23.427719  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:23.463296  213635 cri.go:89] found id: ""
	I0414 17:47:23.463324  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.463335  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:23.463344  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:23.463408  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:23.497377  213635 cri.go:89] found id: ""
	I0414 17:47:23.497407  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.497418  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:23.497426  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:23.497487  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:23.534162  213635 cri.go:89] found id: ""
	I0414 17:47:23.534209  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.534222  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:23.534229  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:23.534299  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:23.574494  213635 cri.go:89] found id: ""
	I0414 17:47:23.574524  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.574535  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:23.574542  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:23.574611  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:23.612210  213635 cri.go:89] found id: ""
	I0414 17:47:23.612265  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.612279  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:23.612289  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:23.612304  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:23.689765  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:23.689802  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:23.725675  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:23.725709  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:23.778002  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:23.778031  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:23.793019  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:23.793052  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:23.866451  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:26.367039  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:26.381917  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:26.381987  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:26.416638  213635 cri.go:89] found id: ""
	I0414 17:47:26.416661  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.416668  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:26.416674  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:26.416721  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:26.458324  213635 cri.go:89] found id: ""
	I0414 17:47:26.458349  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.458360  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:26.458367  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:26.458423  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:26.493044  213635 cri.go:89] found id: ""
	I0414 17:47:26.493096  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.493109  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:26.493116  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:26.493181  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:26.527654  213635 cri.go:89] found id: ""
	I0414 17:47:26.527690  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.527702  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:26.527709  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:26.527769  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:26.565607  213635 cri.go:89] found id: ""
	I0414 17:47:26.565633  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.565639  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:26.565645  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:26.565692  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:26.598157  213635 cri.go:89] found id: ""
	I0414 17:47:26.598186  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.598196  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:26.598204  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:26.598264  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:26.631534  213635 cri.go:89] found id: ""
	I0414 17:47:26.631572  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.631581  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:26.631586  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:26.631652  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:26.669109  213635 cri.go:89] found id: ""
	I0414 17:47:26.669134  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.669145  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:26.669155  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:26.669169  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:26.722048  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:26.722075  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:26.735141  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:26.735160  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:26.808950  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:26.808979  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:26.808996  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:26.896662  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:26.896693  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:23.785953  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:26.284260  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:27.386201  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:29.386726  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:29.146828  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:31.646619  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:29.440079  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:29.454761  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:29.454837  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:29.488451  213635 cri.go:89] found id: ""
	I0414 17:47:29.488480  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.488491  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:29.488499  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:29.488548  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:29.520861  213635 cri.go:89] found id: ""
	I0414 17:47:29.520891  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.520902  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:29.520908  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:29.520963  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:29.557913  213635 cri.go:89] found id: ""
	I0414 17:47:29.557939  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.557949  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:29.557956  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:29.558013  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:29.596839  213635 cri.go:89] found id: ""
	I0414 17:47:29.596878  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.596889  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:29.596896  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:29.596959  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:29.631746  213635 cri.go:89] found id: ""
	I0414 17:47:29.631779  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.631789  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:29.631797  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:29.631864  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:29.667006  213635 cri.go:89] found id: ""
	I0414 17:47:29.667034  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.667048  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:29.667055  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:29.667111  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:29.700458  213635 cri.go:89] found id: ""
	I0414 17:47:29.700490  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.700500  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:29.700507  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:29.700569  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:29.736776  213635 cri.go:89] found id: ""
	I0414 17:47:29.736804  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.736814  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:29.736825  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:29.736840  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:29.776831  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:29.776871  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:29.830601  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:29.830632  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:29.844366  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:29.844396  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:29.920571  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:29.920595  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:29.920611  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:32.502415  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:32.516740  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:32.516806  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:32.551360  213635 cri.go:89] found id: ""
	I0414 17:47:32.551380  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.551387  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:32.551393  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:32.551440  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:32.588757  213635 cri.go:89] found id: ""
	I0414 17:47:32.588785  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.588795  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:32.588802  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:32.588869  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:32.622369  213635 cri.go:89] found id: ""
	I0414 17:47:32.622394  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.622405  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:32.622413  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:32.622473  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:32.658310  213635 cri.go:89] found id: ""
	I0414 17:47:32.658334  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.658343  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:32.658350  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:32.658408  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:32.692724  213635 cri.go:89] found id: ""
	I0414 17:47:32.692756  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.692768  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:32.692776  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:32.692836  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:32.729086  213635 cri.go:89] found id: ""
	I0414 17:47:32.729113  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.729121  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:32.729127  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:32.729182  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:32.761853  213635 cri.go:89] found id: ""
	I0414 17:47:32.761878  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.761886  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:32.761891  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:32.761937  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:32.794906  213635 cri.go:89] found id: ""
	I0414 17:47:32.794931  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.794938  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:32.794945  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:32.794956  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:32.876985  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:32.877027  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:32.915184  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:32.915210  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:32.965418  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:32.965449  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:32.978245  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:32.978270  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:33.046592  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:28.782031  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:31.281960  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:33.283783  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:31.885919  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:34.385966  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:34.146066  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:36.645902  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:35.547721  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:35.562729  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:35.562794  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:35.600323  213635 cri.go:89] found id: ""
	I0414 17:47:35.600353  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.600365  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:35.600374  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:35.600426  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:35.639091  213635 cri.go:89] found id: ""
	I0414 17:47:35.639116  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.639124  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:35.639130  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:35.639185  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:35.674709  213635 cri.go:89] found id: ""
	I0414 17:47:35.674743  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.674755  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:35.674763  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:35.674825  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:35.712316  213635 cri.go:89] found id: ""
	I0414 17:47:35.712340  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.712347  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:35.712353  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:35.712399  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:35.746497  213635 cri.go:89] found id: ""
	I0414 17:47:35.746525  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.746535  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:35.746542  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:35.746611  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:35.787414  213635 cri.go:89] found id: ""
	I0414 17:47:35.787436  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.787445  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:35.787460  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:35.787514  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:35.818830  213635 cri.go:89] found id: ""
	I0414 17:47:35.818857  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.818867  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:35.818874  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:35.818938  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:35.854020  213635 cri.go:89] found id: ""
	I0414 17:47:35.854048  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.854059  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:35.854082  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:35.854095  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:35.907502  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:35.907530  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:35.922223  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:35.922248  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:35.992058  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:35.992085  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:35.992101  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:36.070377  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:36.070413  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:35.782944  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:38.283160  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:36.388560  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:38.886997  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:38.647280  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:41.146882  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:38.612483  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:38.625570  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:38.625639  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:38.664060  213635 cri.go:89] found id: ""
	I0414 17:47:38.664084  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.664104  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:38.664112  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:38.664168  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:38.698505  213635 cri.go:89] found id: ""
	I0414 17:47:38.698535  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.698546  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:38.698553  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:38.698614  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:38.735113  213635 cri.go:89] found id: ""
	I0414 17:47:38.735142  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.735153  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:38.735161  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:38.735229  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:38.773173  213635 cri.go:89] found id: ""
	I0414 17:47:38.773203  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.773211  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:38.773216  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:38.773270  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:38.807136  213635 cri.go:89] found id: ""
	I0414 17:47:38.807167  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.807178  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:38.807186  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:38.807244  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:38.844350  213635 cri.go:89] found id: ""
	I0414 17:47:38.844375  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.844384  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:38.844392  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:38.844445  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:38.879565  213635 cri.go:89] found id: ""
	I0414 17:47:38.879587  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.879594  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:38.879599  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:38.879658  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:38.916412  213635 cri.go:89] found id: ""
	I0414 17:47:38.916449  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.916457  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:38.916465  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:38.916475  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:38.953944  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:38.953972  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:39.004989  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:39.005019  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:39.018618  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:39.018640  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:39.091095  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:39.091122  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:39.091136  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:41.675012  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:41.689023  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:41.689085  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:41.722675  213635 cri.go:89] found id: ""
	I0414 17:47:41.722698  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.722707  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:41.722715  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:41.722774  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:41.757787  213635 cri.go:89] found id: ""
	I0414 17:47:41.757808  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.757815  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:41.757822  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:41.757895  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:41.792938  213635 cri.go:89] found id: ""
	I0414 17:47:41.792970  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.792981  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:41.792990  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:41.793060  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:41.826121  213635 cri.go:89] found id: ""
	I0414 17:47:41.826145  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.826153  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:41.826158  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:41.826206  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:41.862687  213635 cri.go:89] found id: ""
	I0414 17:47:41.862717  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.862728  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:41.862735  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:41.862810  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:41.901905  213635 cri.go:89] found id: ""
	I0414 17:47:41.901935  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.901945  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:41.901953  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:41.902010  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:41.936560  213635 cri.go:89] found id: ""
	I0414 17:47:41.936591  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.936602  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:41.936609  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:41.936673  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:41.968609  213635 cri.go:89] found id: ""
	I0414 17:47:41.968640  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.968651  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:41.968663  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:41.968677  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:42.037691  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:42.037725  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:42.037742  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:42.123173  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:42.123222  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:42.164982  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:42.165018  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:42.217567  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:42.217601  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:40.283210  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:42.286058  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:40.887506  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:43.387362  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:43.646155  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:46.145968  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:48.147182  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:44.733645  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:44.748083  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:44.748144  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:44.782103  213635 cri.go:89] found id: ""
	I0414 17:47:44.782131  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.782141  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:44.782148  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:44.782200  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:44.825594  213635 cri.go:89] found id: ""
	I0414 17:47:44.825640  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.825652  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:44.825659  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:44.825719  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:44.858967  213635 cri.go:89] found id: ""
	I0414 17:47:44.859000  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.859017  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:44.859024  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:44.859088  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:44.892965  213635 cri.go:89] found id: ""
	I0414 17:47:44.892990  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.892999  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:44.893007  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:44.893073  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:44.926983  213635 cri.go:89] found id: ""
	I0414 17:47:44.927007  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.927014  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:44.927019  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:44.927066  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:44.961406  213635 cri.go:89] found id: ""
	I0414 17:47:44.961459  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.961471  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:44.961478  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:44.961540  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:44.996262  213635 cri.go:89] found id: ""
	I0414 17:47:44.996287  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.996296  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:44.996304  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:44.996368  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:45.029476  213635 cri.go:89] found id: ""
	I0414 17:47:45.029507  213635 logs.go:282] 0 containers: []
	W0414 17:47:45.029518  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:45.029529  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:45.029543  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:45.100081  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:45.100110  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:45.100122  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:45.179286  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:45.179319  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:45.220129  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:45.220166  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:45.275257  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:45.275292  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:47.792170  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:47.805709  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:47.805769  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:47.842023  213635 cri.go:89] found id: ""
	I0414 17:47:47.842050  213635 logs.go:282] 0 containers: []
	W0414 17:47:47.842058  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:47.842063  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:47.842118  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:47.884228  213635 cri.go:89] found id: ""
	I0414 17:47:47.884260  213635 logs.go:282] 0 containers: []
	W0414 17:47:47.884271  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:47.884278  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:47.884338  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:47.924093  213635 cri.go:89] found id: ""
	I0414 17:47:47.924121  213635 logs.go:282] 0 containers: []
	W0414 17:47:47.924130  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:47.924137  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:47.924193  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:47.965378  213635 cri.go:89] found id: ""
	I0414 17:47:47.965406  213635 logs.go:282] 0 containers: []
	W0414 17:47:47.965416  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:47.965423  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:47.965538  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:48.003136  213635 cri.go:89] found id: ""
	I0414 17:47:48.003165  213635 logs.go:282] 0 containers: []
	W0414 17:47:48.003178  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:48.003187  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:48.003253  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:48.042729  213635 cri.go:89] found id: ""
	I0414 17:47:48.042758  213635 logs.go:282] 0 containers: []
	W0414 17:47:48.042768  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:48.042774  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:48.042837  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:48.077654  213635 cri.go:89] found id: ""
	I0414 17:47:48.077682  213635 logs.go:282] 0 containers: []
	W0414 17:47:48.077692  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:48.077699  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:48.077749  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:48.109967  213635 cri.go:89] found id: ""
	I0414 17:47:48.109991  213635 logs.go:282] 0 containers: []
	W0414 17:47:48.109998  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:48.110006  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:48.110017  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:48.125245  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:48.125277  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:48.194705  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:48.194725  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:48.194738  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:44.783825  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:47.283708  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:45.886120  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:47.886616  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:50.387382  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:50.646377  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:53.145406  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:48.287160  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:48.287196  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:48.335515  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:48.335547  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:50.892108  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:50.905172  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:50.905234  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:50.940079  213635 cri.go:89] found id: ""
	I0414 17:47:50.940104  213635 logs.go:282] 0 containers: []
	W0414 17:47:50.940111  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:50.940116  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:50.940176  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:50.973887  213635 cri.go:89] found id: ""
	I0414 17:47:50.973912  213635 logs.go:282] 0 containers: []
	W0414 17:47:50.973919  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:50.973926  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:50.973982  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:51.012547  213635 cri.go:89] found id: ""
	I0414 17:47:51.012568  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.012577  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:51.012584  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:51.012640  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:51.053157  213635 cri.go:89] found id: ""
	I0414 17:47:51.053180  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.053188  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:51.053196  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:51.053249  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:51.110289  213635 cri.go:89] found id: ""
	I0414 17:47:51.110319  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.110330  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:51.110337  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:51.110393  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:51.144361  213635 cri.go:89] found id: ""
	I0414 17:47:51.144383  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.144394  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:51.144402  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:51.144530  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:51.177527  213635 cri.go:89] found id: ""
	I0414 17:47:51.177563  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.177571  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:51.177576  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:51.177636  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:51.210869  213635 cri.go:89] found id: ""
	I0414 17:47:51.210891  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.210899  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:51.210907  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:51.210918  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:51.247291  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:51.247317  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:51.299677  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:51.299706  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:51.313384  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:51.313409  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:51.388212  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:51.388239  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:51.388254  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:49.781341  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:51.782513  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:52.886676  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:55.386338  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:55.145724  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:57.146515  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:53.976114  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:53.989051  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:53.989115  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:54.023756  213635 cri.go:89] found id: ""
	I0414 17:47:54.023788  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.023799  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:54.023805  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:54.023869  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:54.061807  213635 cri.go:89] found id: ""
	I0414 17:47:54.061853  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.061865  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:54.061872  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:54.061930  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:54.095835  213635 cri.go:89] found id: ""
	I0414 17:47:54.095878  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.095890  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:54.095897  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:54.096006  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:54.131513  213635 cri.go:89] found id: ""
	I0414 17:47:54.131535  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.131543  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:54.131548  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:54.131594  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:54.171002  213635 cri.go:89] found id: ""
	I0414 17:47:54.171024  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.171031  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:54.171037  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:54.171095  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:54.206779  213635 cri.go:89] found id: ""
	I0414 17:47:54.206801  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.206808  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:54.206818  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:54.206876  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:54.252485  213635 cri.go:89] found id: ""
	I0414 17:47:54.252533  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.252547  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:54.252555  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:54.252628  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:54.290628  213635 cri.go:89] found id: ""
	I0414 17:47:54.290656  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.290667  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:54.290676  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:54.290689  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:54.364000  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:54.364020  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:54.364032  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:54.446117  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:54.446152  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:54.488749  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:54.488775  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:54.540890  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:54.540922  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:57.055546  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:57.069362  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:57.069420  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:57.112914  213635 cri.go:89] found id: ""
	I0414 17:47:57.112942  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.112949  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:57.112955  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:57.113002  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:57.149533  213635 cri.go:89] found id: ""
	I0414 17:47:57.149553  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.149560  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:57.149565  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:57.149622  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:57.184595  213635 cri.go:89] found id: ""
	I0414 17:47:57.184624  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.184632  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:57.184637  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:57.184683  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:57.219904  213635 cri.go:89] found id: ""
	I0414 17:47:57.219931  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.219942  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:57.219949  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:57.220008  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:57.255709  213635 cri.go:89] found id: ""
	I0414 17:47:57.255736  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.255745  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:57.255750  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:57.255809  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:57.289390  213635 cri.go:89] found id: ""
	I0414 17:47:57.289413  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.289419  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:57.289425  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:57.289474  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:57.329950  213635 cri.go:89] found id: ""
	I0414 17:47:57.329972  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.329978  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:57.329983  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:57.330028  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:57.365856  213635 cri.go:89] found id: ""
	I0414 17:47:57.365888  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.365901  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:57.365911  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:57.365925  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:57.378637  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:57.378661  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:57.446639  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:57.446662  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:57.446676  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:57.536049  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:57.536086  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:57.585473  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:57.585506  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:53.782957  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:56.286401  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:57.387720  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:59.886486  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:59.647389  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:02.147002  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:00.135711  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:00.151060  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:00.151131  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:00.184972  213635 cri.go:89] found id: ""
	I0414 17:48:00.185005  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.185016  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:00.185023  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:00.185088  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:00.218051  213635 cri.go:89] found id: ""
	I0414 17:48:00.218085  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.218093  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:00.218099  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:00.218156  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:00.251291  213635 cri.go:89] found id: ""
	I0414 17:48:00.251318  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.251325  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:00.251331  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:00.251392  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:00.291683  213635 cri.go:89] found id: ""
	I0414 17:48:00.291706  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.291713  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:00.291718  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:00.291765  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:00.329316  213635 cri.go:89] found id: ""
	I0414 17:48:00.329342  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.329350  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:00.329356  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:00.329409  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:00.364819  213635 cri.go:89] found id: ""
	I0414 17:48:00.364848  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.364856  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:00.364861  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:00.364905  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:00.404928  213635 cri.go:89] found id: ""
	I0414 17:48:00.404961  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.404971  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:00.404978  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:00.405040  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:00.439708  213635 cri.go:89] found id: ""
	I0414 17:48:00.439739  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.439750  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:00.439761  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:00.439776  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:00.479252  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:00.479285  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:00.533545  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:00.533576  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:00.546920  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:00.546952  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:00.614440  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:00.614461  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:00.614476  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:03.197930  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:03.212912  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:03.212973  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:03.272435  213635 cri.go:89] found id: ""
	I0414 17:48:03.272467  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.272479  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:03.272487  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:03.272554  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:58.781206  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:00.781677  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:03.286395  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:01.886559  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:03.887796  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:04.147694  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:06.647249  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:03.336351  213635 cri.go:89] found id: ""
	I0414 17:48:03.336373  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.336380  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:03.336386  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:03.336430  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:03.370368  213635 cri.go:89] found id: ""
	I0414 17:48:03.370398  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.370408  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:03.370422  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:03.370475  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:03.408402  213635 cri.go:89] found id: ""
	I0414 17:48:03.408429  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.408436  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:03.408442  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:03.408491  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:03.442912  213635 cri.go:89] found id: ""
	I0414 17:48:03.442939  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.442950  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:03.442957  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:03.443019  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:03.479439  213635 cri.go:89] found id: ""
	I0414 17:48:03.479467  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.479476  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:03.479481  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:03.479544  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:03.517971  213635 cri.go:89] found id: ""
	I0414 17:48:03.517993  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.518000  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:03.518005  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:03.518059  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:03.556177  213635 cri.go:89] found id: ""
	I0414 17:48:03.556208  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.556216  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:03.556224  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:03.556237  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:03.594142  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:03.594167  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:03.644688  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:03.644718  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:03.658140  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:03.658164  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:03.729627  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:03.729649  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:03.729663  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:06.309939  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:06.323927  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:06.323990  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:06.364388  213635 cri.go:89] found id: ""
	I0414 17:48:06.364412  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.364426  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:06.364431  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:06.364477  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:06.398800  213635 cri.go:89] found id: ""
	I0414 17:48:06.398821  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.398828  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:06.398833  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:06.398885  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:06.442842  213635 cri.go:89] found id: ""
	I0414 17:48:06.442873  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.442884  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:06.442891  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:06.442973  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:06.485910  213635 cri.go:89] found id: ""
	I0414 17:48:06.485945  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.485955  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:06.485962  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:06.486023  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:06.520624  213635 cri.go:89] found id: ""
	I0414 17:48:06.520656  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.520668  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:06.520675  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:06.520741  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:06.555790  213635 cri.go:89] found id: ""
	I0414 17:48:06.555833  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.555845  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:06.555853  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:06.555916  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:06.589144  213635 cri.go:89] found id: ""
	I0414 17:48:06.589166  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.589173  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:06.589177  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:06.589223  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:06.623771  213635 cri.go:89] found id: ""
	I0414 17:48:06.623808  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.623824  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:06.623833  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:06.623843  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:06.679003  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:06.679039  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:06.695303  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:06.695328  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:06.770562  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:06.770585  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:06.770597  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:06.850617  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:06.850652  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:05.782269  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:07.783336  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:06.387181  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:08.886322  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:09.145702  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:11.147099  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:09.390500  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:09.403827  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:09.403885  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:09.438395  213635 cri.go:89] found id: ""
	I0414 17:48:09.438420  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.438428  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:09.438434  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:09.438484  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:09.473071  213635 cri.go:89] found id: ""
	I0414 17:48:09.473098  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.473106  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:09.473112  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:09.473159  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:09.506175  213635 cri.go:89] found id: ""
	I0414 17:48:09.506205  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.506216  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:09.506223  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:09.506272  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:09.540488  213635 cri.go:89] found id: ""
	I0414 17:48:09.540511  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.540518  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:09.540523  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:09.540583  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:09.576189  213635 cri.go:89] found id: ""
	I0414 17:48:09.576222  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.576233  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:09.576241  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:09.576302  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:09.607908  213635 cri.go:89] found id: ""
	I0414 17:48:09.607937  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.607945  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:09.607950  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:09.608000  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:09.642069  213635 cri.go:89] found id: ""
	I0414 17:48:09.642098  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.642108  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:09.642115  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:09.642177  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:09.675434  213635 cri.go:89] found id: ""
	I0414 17:48:09.675463  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.675474  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:09.675484  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:09.675496  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:09.754118  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:09.754154  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:09.797336  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:09.797373  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:09.849366  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:09.849407  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:09.863427  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:09.863458  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:09.934735  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:12.435482  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:12.449310  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:12.449374  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:12.484115  213635 cri.go:89] found id: ""
	I0414 17:48:12.484143  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.484153  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:12.484160  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:12.484213  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:12.521972  213635 cri.go:89] found id: ""
	I0414 17:48:12.521994  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.522001  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:12.522012  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:12.522071  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:12.554192  213635 cri.go:89] found id: ""
	I0414 17:48:12.554219  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.554229  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:12.554237  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:12.554296  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:12.587420  213635 cri.go:89] found id: ""
	I0414 17:48:12.587450  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.587460  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:12.587467  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:12.587526  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:12.621562  213635 cri.go:89] found id: ""
	I0414 17:48:12.621588  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.621599  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:12.621608  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:12.621672  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:12.660123  213635 cri.go:89] found id: ""
	I0414 17:48:12.660147  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.660155  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:12.660160  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:12.660216  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:12.693979  213635 cri.go:89] found id: ""
	I0414 17:48:12.694010  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.694021  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:12.694029  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:12.694097  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:12.728017  213635 cri.go:89] found id: ""
	I0414 17:48:12.728043  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.728051  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:12.728060  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:12.728072  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:12.782896  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:12.782927  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:12.795655  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:12.795679  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:12.865150  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:12.865183  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:12.865197  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:12.950645  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:12.950682  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:10.285784  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:12.781397  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:10.886362  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:12.888044  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:15.386245  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:13.646393  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:16.146335  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:16.640867  212456 pod_ready.go:82] duration metric: took 4m0.000569834s for pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace to be "Ready" ...
	E0414 17:48:16.640896  212456 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0414 17:48:16.640935  212456 pod_ready.go:39] duration metric: took 4m12.70748193s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:48:16.640979  212456 kubeadm.go:597] duration metric: took 4m20.79960225s to restartPrimaryControlPlane
	W0414 17:48:16.641051  212456 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0414 17:48:16.641091  212456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 17:48:15.490793  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:15.504867  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:15.504941  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:15.538968  213635 cri.go:89] found id: ""
	I0414 17:48:15.538990  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.538998  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:15.539003  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:15.539049  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:15.573937  213635 cri.go:89] found id: ""
	I0414 17:48:15.573961  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.573968  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:15.573973  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:15.574019  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:15.609320  213635 cri.go:89] found id: ""
	I0414 17:48:15.609346  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.609360  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:15.609367  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:15.609425  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:15.641598  213635 cri.go:89] found id: ""
	I0414 17:48:15.641626  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.641635  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:15.641641  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:15.641695  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:15.675213  213635 cri.go:89] found id: ""
	I0414 17:48:15.675239  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.675248  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:15.675255  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:15.675313  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:15.710542  213635 cri.go:89] found id: ""
	I0414 17:48:15.710565  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.710572  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:15.710578  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:15.710623  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:15.745699  213635 cri.go:89] found id: ""
	I0414 17:48:15.745724  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.745735  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:15.745742  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:15.745792  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:15.782559  213635 cri.go:89] found id: ""
	I0414 17:48:15.782586  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.782596  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:15.782605  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:15.782619  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:15.837926  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:15.837964  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:15.854293  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:15.854333  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:15.944741  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:15.944761  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:15.944773  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:16.032687  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:16.032716  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:14.784926  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:17.280964  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:17.886293  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:20.386161  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:18.574911  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:18.589009  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:18.589060  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:18.625705  213635 cri.go:89] found id: ""
	I0414 17:48:18.625730  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.625738  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:18.625743  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:18.625796  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:18.659670  213635 cri.go:89] found id: ""
	I0414 17:48:18.659704  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.659713  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:18.659719  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:18.659762  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:18.694973  213635 cri.go:89] found id: ""
	I0414 17:48:18.694997  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.695005  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:18.695011  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:18.695083  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:18.733777  213635 cri.go:89] found id: ""
	I0414 17:48:18.733801  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.733808  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:18.733813  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:18.733881  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:18.765747  213635 cri.go:89] found id: ""
	I0414 17:48:18.765768  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.765775  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:18.765780  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:18.765856  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:18.799558  213635 cri.go:89] found id: ""
	I0414 17:48:18.799585  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.799595  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:18.799601  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:18.799653  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:18.835245  213635 cri.go:89] found id: ""
	I0414 17:48:18.835279  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.835291  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:18.835300  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:18.835354  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:18.870176  213635 cri.go:89] found id: ""
	I0414 17:48:18.870201  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.870212  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:18.870222  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:18.870236  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:18.883166  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:18.883195  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:18.946103  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:18.946128  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:18.946145  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:19.023462  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:19.023496  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:19.067254  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:19.067281  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:21.619412  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:21.635163  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:21.635233  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:21.671680  213635 cri.go:89] found id: ""
	I0414 17:48:21.671705  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.671713  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:21.671719  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:21.671767  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:21.709955  213635 cri.go:89] found id: ""
	I0414 17:48:21.709987  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.709998  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:21.710005  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:21.710064  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:21.743179  213635 cri.go:89] found id: ""
	I0414 17:48:21.743202  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.743209  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:21.743214  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:21.743267  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:21.775835  213635 cri.go:89] found id: ""
	I0414 17:48:21.775862  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.775870  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:21.775875  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:21.775920  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:21.810164  213635 cri.go:89] found id: ""
	I0414 17:48:21.810190  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.810201  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:21.810207  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:21.810253  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:21.848616  213635 cri.go:89] found id: ""
	I0414 17:48:21.848639  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.848646  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:21.848651  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:21.848717  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:21.887985  213635 cri.go:89] found id: ""
	I0414 17:48:21.888014  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.888024  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:21.888030  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:21.888076  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:21.927965  213635 cri.go:89] found id: ""
	I0414 17:48:21.927992  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.928003  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:21.928013  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:21.928028  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:21.989253  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:21.989294  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:22.003399  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:22.003429  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:22.071849  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:22.071872  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:22.071889  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:22.149857  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:22.149888  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:19.283105  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:21.782995  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:22.388207  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:24.886911  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:24.691531  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:24.706169  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:24.706230  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:24.745747  213635 cri.go:89] found id: ""
	I0414 17:48:24.745780  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.745791  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:24.745799  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:24.745886  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:24.785261  213635 cri.go:89] found id: ""
	I0414 17:48:24.785284  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.785291  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:24.785296  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:24.785351  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:24.824491  213635 cri.go:89] found id: ""
	I0414 17:48:24.824525  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.824536  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:24.824550  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:24.824606  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:24.868655  213635 cri.go:89] found id: ""
	I0414 17:48:24.868683  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.868696  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:24.868704  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:24.868769  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:24.910959  213635 cri.go:89] found id: ""
	I0414 17:48:24.910982  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.910989  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:24.910995  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:24.911053  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:24.944036  213635 cri.go:89] found id: ""
	I0414 17:48:24.944065  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.944073  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:24.944078  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:24.944127  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:24.977481  213635 cri.go:89] found id: ""
	I0414 17:48:24.977512  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.977522  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:24.977529  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:24.977589  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:25.010063  213635 cri.go:89] found id: ""
	I0414 17:48:25.010087  213635 logs.go:282] 0 containers: []
	W0414 17:48:25.010094  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:25.010103  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:25.010114  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:25.062645  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:25.062680  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:25.077120  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:25.077144  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:25.151533  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:25.151553  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:25.151565  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:25.230945  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:25.230985  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:27.774758  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:27.789640  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:27.789692  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:27.822128  213635 cri.go:89] found id: ""
	I0414 17:48:27.822162  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.822169  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:27.822175  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:27.822227  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:27.858364  213635 cri.go:89] found id: ""
	I0414 17:48:27.858394  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.858401  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:27.858406  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:27.858452  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:27.893587  213635 cri.go:89] found id: ""
	I0414 17:48:27.893618  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.893628  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:27.893636  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:27.893695  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:27.930766  213635 cri.go:89] found id: ""
	I0414 17:48:27.930799  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.930810  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:27.930817  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:27.930879  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:27.962936  213635 cri.go:89] found id: ""
	I0414 17:48:27.962966  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.962977  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:27.962983  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:27.963036  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:27.999471  213635 cri.go:89] found id: ""
	I0414 17:48:27.999503  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.999511  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:27.999517  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:27.999575  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:28.030604  213635 cri.go:89] found id: ""
	I0414 17:48:28.030636  213635 logs.go:282] 0 containers: []
	W0414 17:48:28.030645  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:28.030650  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:28.030704  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:28.066407  213635 cri.go:89] found id: ""
	I0414 17:48:28.066436  213635 logs.go:282] 0 containers: []
	W0414 17:48:28.066446  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:28.066457  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:28.066471  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:28.118182  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:28.118210  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:28.131007  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:28.131031  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:28.198468  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:28.198488  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:28.198500  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:24.283310  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:26.283749  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:27.386845  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:29.387642  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:28.286352  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:28.286387  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:30.826694  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:30.839877  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:30.839949  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:30.873980  213635 cri.go:89] found id: ""
	I0414 17:48:30.874010  213635 logs.go:282] 0 containers: []
	W0414 17:48:30.874021  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:30.874028  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:30.874087  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:30.909567  213635 cri.go:89] found id: ""
	I0414 17:48:30.909593  213635 logs.go:282] 0 containers: []
	W0414 17:48:30.909600  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:30.909606  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:30.909661  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:30.943382  213635 cri.go:89] found id: ""
	I0414 17:48:30.943414  213635 logs.go:282] 0 containers: []
	W0414 17:48:30.943424  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:30.943431  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:30.943487  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:30.976444  213635 cri.go:89] found id: ""
	I0414 17:48:30.976477  213635 logs.go:282] 0 containers: []
	W0414 17:48:30.976488  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:30.976496  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:30.976555  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:31.010623  213635 cri.go:89] found id: ""
	I0414 17:48:31.010651  213635 logs.go:282] 0 containers: []
	W0414 17:48:31.010662  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:31.010669  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:31.010727  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:31.049542  213635 cri.go:89] found id: ""
	I0414 17:48:31.049568  213635 logs.go:282] 0 containers: []
	W0414 17:48:31.049578  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:31.049585  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:31.049646  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:31.082301  213635 cri.go:89] found id: ""
	I0414 17:48:31.082326  213635 logs.go:282] 0 containers: []
	W0414 17:48:31.082336  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:31.082343  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:31.082403  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:31.115742  213635 cri.go:89] found id: ""
	I0414 17:48:31.115768  213635 logs.go:282] 0 containers: []
	W0414 17:48:31.115776  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:31.115784  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:31.115794  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:31.167568  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:31.167598  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:31.180202  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:31.180229  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:31.247958  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:31.247980  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:31.247995  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:31.337341  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:31.337379  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:28.780817  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:30.781721  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:32.782156  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:31.886992  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:34.386180  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:33.892139  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:33.905803  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:33.905884  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:33.945429  213635 cri.go:89] found id: ""
	I0414 17:48:33.945458  213635 logs.go:282] 0 containers: []
	W0414 17:48:33.945468  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:33.945476  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:33.945524  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:33.978018  213635 cri.go:89] found id: ""
	I0414 17:48:33.978047  213635 logs.go:282] 0 containers: []
	W0414 17:48:33.978056  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:33.978063  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:33.978113  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:34.013902  213635 cri.go:89] found id: ""
	I0414 17:48:34.013926  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.013934  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:34.013940  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:34.013986  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:34.052308  213635 cri.go:89] found id: ""
	I0414 17:48:34.052340  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.052351  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:34.052358  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:34.052423  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:34.092541  213635 cri.go:89] found id: ""
	I0414 17:48:34.092565  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.092572  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:34.092577  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:34.092638  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:34.126690  213635 cri.go:89] found id: ""
	I0414 17:48:34.126725  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.126736  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:34.126745  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:34.126810  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:34.161043  213635 cri.go:89] found id: ""
	I0414 17:48:34.161072  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.161081  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:34.161087  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:34.161148  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:34.195793  213635 cri.go:89] found id: ""
	I0414 17:48:34.195817  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.195825  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:34.195835  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:34.195847  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:34.238858  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:34.238890  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:34.294092  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:34.294122  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:34.310473  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:34.310510  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:34.377489  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:34.377517  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:34.377535  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:36.963220  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:36.976594  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:36.976663  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:37.009685  213635 cri.go:89] found id: ""
	I0414 17:48:37.009710  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.009720  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:37.009727  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:37.009780  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:37.044805  213635 cri.go:89] found id: ""
	I0414 17:48:37.044832  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.044845  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:37.044852  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:37.044915  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:37.096059  213635 cri.go:89] found id: ""
	I0414 17:48:37.096082  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.096089  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:37.096094  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:37.096146  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:37.132630  213635 cri.go:89] found id: ""
	I0414 17:48:37.132654  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.132664  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:37.132670  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:37.132731  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:37.168840  213635 cri.go:89] found id: ""
	I0414 17:48:37.168865  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.168874  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:37.168881  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:37.168940  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:37.202226  213635 cri.go:89] found id: ""
	I0414 17:48:37.202250  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.202258  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:37.202264  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:37.202321  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:37.236649  213635 cri.go:89] found id: ""
	I0414 17:48:37.236677  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.236687  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:37.236695  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:37.236758  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:37.270393  213635 cri.go:89] found id: ""
	I0414 17:48:37.270417  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.270427  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:37.270438  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:37.270454  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:37.320463  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:37.320492  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:37.334355  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:37.334388  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:37.402650  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:37.402674  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:37.402686  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:37.479961  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:37.479999  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:34.782317  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:37.285771  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:36.886679  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:39.386353  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:40.024993  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:40.038522  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:40.038578  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:40.075237  213635 cri.go:89] found id: ""
	I0414 17:48:40.075264  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.075274  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:40.075282  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:40.075342  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:40.117027  213635 cri.go:89] found id: ""
	I0414 17:48:40.117052  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.117059  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:40.117065  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:40.117130  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:40.150149  213635 cri.go:89] found id: ""
	I0414 17:48:40.150181  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.150193  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:40.150201  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:40.150265  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:40.185087  213635 cri.go:89] found id: ""
	I0414 17:48:40.185114  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.185122  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:40.185128  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:40.185179  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:40.219050  213635 cri.go:89] found id: ""
	I0414 17:48:40.219077  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.219084  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:40.219090  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:40.219137  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:40.252681  213635 cri.go:89] found id: ""
	I0414 17:48:40.252712  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.252723  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:40.252731  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:40.252796  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:40.289524  213635 cri.go:89] found id: ""
	I0414 17:48:40.289551  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.289559  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:40.289564  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:40.289622  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:40.322952  213635 cri.go:89] found id: ""
	I0414 17:48:40.322986  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.322998  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:40.323009  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:40.323023  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:40.375012  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:40.375046  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:40.389868  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:40.389900  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:40.456829  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:40.456849  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:40.456861  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:40.537720  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:40.537759  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:43.079573  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:43.092754  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:43.092808  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:43.128097  213635 cri.go:89] found id: ""
	I0414 17:48:43.128131  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.128142  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:43.128150  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:43.128210  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:43.161361  213635 cri.go:89] found id: ""
	I0414 17:48:43.161391  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.161403  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:43.161410  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:43.161470  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:43.196698  213635 cri.go:89] found id: ""
	I0414 17:48:43.196780  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.196796  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:43.196807  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:43.196870  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:43.230687  213635 cri.go:89] found id: ""
	I0414 17:48:43.230717  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.230724  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:43.230729  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:43.230790  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:43.272118  213635 cri.go:89] found id: ""
	I0414 17:48:43.272143  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.272149  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:43.272155  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:43.272212  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:39.285905  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:41.782863  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:41.387417  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:43.886997  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:44.312670  212456 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.671544959s)
	I0414 17:48:44.312762  212456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:48:44.332203  212456 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:48:44.347886  212456 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:48:44.360967  212456 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:48:44.360988  212456 kubeadm.go:157] found existing configuration files:
	
	I0414 17:48:44.361036  212456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0414 17:48:44.374271  212456 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:48:44.374334  212456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:48:44.391104  212456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0414 17:48:44.407332  212456 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:48:44.407386  212456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:48:44.418237  212456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0414 17:48:44.427328  212456 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:48:44.427373  212456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:48:44.437284  212456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0414 17:48:44.446412  212456 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:48:44.446459  212456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 17:48:44.455796  212456 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:48:44.629587  212456 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:48:43.305507  213635 cri.go:89] found id: ""
	I0414 17:48:43.305544  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.305557  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:43.305567  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:43.305667  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:43.342294  213635 cri.go:89] found id: ""
	I0414 17:48:43.342328  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.342339  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:43.342346  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:43.342403  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:43.374476  213635 cri.go:89] found id: ""
	I0414 17:48:43.374502  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.374510  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:43.374519  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:43.374529  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:43.429817  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:43.429869  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:43.446168  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:43.446205  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:43.562603  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:43.562629  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:43.562647  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:43.647833  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:43.647873  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:46.192567  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:46.205502  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:46.205572  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:46.241592  213635 cri.go:89] found id: ""
	I0414 17:48:46.241618  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.241628  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:46.241635  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:46.241698  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:46.276977  213635 cri.go:89] found id: ""
	I0414 17:48:46.277004  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.277014  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:46.277020  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:46.277079  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:46.312906  213635 cri.go:89] found id: ""
	I0414 17:48:46.312930  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.312939  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:46.312946  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:46.313007  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:46.346994  213635 cri.go:89] found id: ""
	I0414 17:48:46.347018  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.347026  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:46.347031  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:46.347077  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:46.380069  213635 cri.go:89] found id: ""
	I0414 17:48:46.380093  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.380104  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:46.380111  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:46.380172  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:46.416546  213635 cri.go:89] found id: ""
	I0414 17:48:46.416574  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.416584  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:46.416592  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:46.416652  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:46.453343  213635 cri.go:89] found id: ""
	I0414 17:48:46.453374  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.453386  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:46.453393  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:46.453447  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:46.490450  213635 cri.go:89] found id: ""
	I0414 17:48:46.490479  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.490489  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:46.490499  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:46.490513  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:46.551507  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:46.551542  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:46.565243  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:46.565272  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:46.636609  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:46.636634  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:46.636651  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:46.715829  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:46.715872  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:44.284758  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:46.782687  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:46.386592  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:46.880932  212269 pod_ready.go:82] duration metric: took 4m0.000148322s for pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace to be "Ready" ...
	E0414 17:48:46.880964  212269 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace to be "Ready" (will not retry!)
	I0414 17:48:46.880988  212269 pod_ready.go:39] duration metric: took 4m15.038784615s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:48:46.881025  212269 kubeadm.go:597] duration metric: took 4m58.434849831s to restartPrimaryControlPlane
	W0414 17:48:46.881139  212269 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0414 17:48:46.881174  212269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 17:48:52.039840  212456 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 17:48:52.039919  212456 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:48:52.040033  212456 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:48:52.040172  212456 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:48:52.040311  212456 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 17:48:52.040403  212456 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:48:52.041680  212456 out.go:235]   - Generating certificates and keys ...
	I0414 17:48:52.041782  212456 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:48:52.041901  212456 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:48:52.042004  212456 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 17:48:52.042135  212456 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 17:48:52.042241  212456 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 17:48:52.042329  212456 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 17:48:52.042439  212456 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 17:48:52.042525  212456 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 17:48:52.042625  212456 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 17:48:52.042746  212456 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 17:48:52.042810  212456 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 17:48:52.042895  212456 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:48:52.042961  212456 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:48:52.043020  212456 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 17:48:52.043068  212456 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:48:52.043153  212456 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:48:52.043209  212456 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:48:52.043309  212456 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:48:52.043396  212456 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:48:52.044723  212456 out.go:235]   - Booting up control plane ...
	I0414 17:48:52.044821  212456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:48:52.044934  212456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:48:52.045009  212456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:48:52.045114  212456 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:48:52.045213  212456 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:48:52.045252  212456 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:48:52.045398  212456 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 17:48:52.045503  212456 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 17:48:52.045581  212456 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.205474ms
	I0414 17:48:52.045662  212456 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 17:48:52.045714  212456 kubeadm.go:310] [api-check] The API server is healthy after 4.502044755s
	I0414 17:48:52.045804  212456 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 17:48:52.045996  212456 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 17:48:52.046104  212456 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 17:48:52.046335  212456 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-061428 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 17:48:52.046423  212456 kubeadm.go:310] [bootstrap-token] Using token: 0x0swo.cnocxvbqul1ca541
	I0414 17:48:52.047605  212456 out.go:235]   - Configuring RBAC rules ...
	I0414 17:48:52.047713  212456 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 17:48:52.047795  212456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 17:48:52.047959  212456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 17:48:52.048082  212456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 17:48:52.048237  212456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 17:48:52.048315  212456 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 17:48:52.048413  212456 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 17:48:52.048451  212456 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 17:48:52.048491  212456 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 17:48:52.048496  212456 kubeadm.go:310] 
	I0414 17:48:52.048549  212456 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 17:48:52.048555  212456 kubeadm.go:310] 
	I0414 17:48:52.048618  212456 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 17:48:52.048629  212456 kubeadm.go:310] 
	I0414 17:48:52.048653  212456 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 17:48:52.048710  212456 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 17:48:52.048756  212456 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 17:48:52.048762  212456 kubeadm.go:310] 
	I0414 17:48:52.048819  212456 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 17:48:52.048829  212456 kubeadm.go:310] 
	I0414 17:48:52.048872  212456 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 17:48:52.048878  212456 kubeadm.go:310] 
	I0414 17:48:52.048920  212456 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 17:48:52.048983  212456 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 17:48:52.049046  212456 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 17:48:52.049053  212456 kubeadm.go:310] 
	I0414 17:48:52.049156  212456 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 17:48:52.049245  212456 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 17:48:52.049251  212456 kubeadm.go:310] 
	I0414 17:48:52.049325  212456 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 0x0swo.cnocxvbqul1ca541 \
	I0414 17:48:52.049412  212456 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d \
	I0414 17:48:52.049431  212456 kubeadm.go:310] 	--control-plane 
	I0414 17:48:52.049437  212456 kubeadm.go:310] 
	I0414 17:48:52.049511  212456 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 17:48:52.049517  212456 kubeadm.go:310] 
	I0414 17:48:52.049584  212456 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 0x0swo.cnocxvbqul1ca541 \
	I0414 17:48:52.049724  212456 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d 
	I0414 17:48:52.049740  212456 cni.go:84] Creating CNI manager for ""
	I0414 17:48:52.049793  212456 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:48:52.051076  212456 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 17:48:52.052229  212456 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 17:48:52.062677  212456 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0414 17:48:52.080923  212456 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 17:48:52.081020  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:52.081077  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-061428 minikube.k8s.io/updated_at=2025_04_14T17_48_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f1e69a1cd498979c80dbe968253c827f6eb2cf37 minikube.k8s.io/name=default-k8s-diff-port-061428 minikube.k8s.io/primary=true
	I0414 17:48:52.125288  212456 ops.go:34] apiserver oom_adj: -16
	I0414 17:48:52.342710  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:52.842859  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:49.255006  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:49.277839  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:49.277915  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:49.340015  213635 cri.go:89] found id: ""
	I0414 17:48:49.340051  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.340063  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:49.340071  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:49.340143  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:49.375879  213635 cri.go:89] found id: ""
	I0414 17:48:49.375907  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.375917  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:49.375924  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:49.375987  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:49.408770  213635 cri.go:89] found id: ""
	I0414 17:48:49.408796  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.408806  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:49.408813  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:49.408877  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:49.446644  213635 cri.go:89] found id: ""
	I0414 17:48:49.446673  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.446682  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:49.446690  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:49.446758  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:49.486858  213635 cri.go:89] found id: ""
	I0414 17:48:49.486887  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.486897  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:49.486904  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:49.486964  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:49.525400  213635 cri.go:89] found id: ""
	I0414 17:48:49.525427  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.525437  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:49.525445  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:49.525507  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:49.559553  213635 cri.go:89] found id: ""
	I0414 17:48:49.559578  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.559587  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:49.559595  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:49.559656  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:49.591090  213635 cri.go:89] found id: ""
	I0414 17:48:49.591123  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.591131  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:49.591144  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:49.591155  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:49.643807  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:49.643841  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:49.657066  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:49.657090  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:49.729359  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:49.729388  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:49.729404  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:49.808543  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:49.808573  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:52.348426  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:52.366010  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:52.366076  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:52.404950  213635 cri.go:89] found id: ""
	I0414 17:48:52.404976  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.404985  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:52.404991  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:52.405046  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:52.445893  213635 cri.go:89] found id: ""
	I0414 17:48:52.445927  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.445937  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:52.445945  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:52.446011  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:52.479635  213635 cri.go:89] found id: ""
	I0414 17:48:52.479657  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.479664  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:52.479671  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:52.479738  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:52.523616  213635 cri.go:89] found id: ""
	I0414 17:48:52.523650  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.523661  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:52.523669  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:52.523730  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:52.571706  213635 cri.go:89] found id: ""
	I0414 17:48:52.571739  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.571751  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:52.571758  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:52.571826  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:52.616799  213635 cri.go:89] found id: ""
	I0414 17:48:52.616822  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.616831  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:52.616836  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:52.616901  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:52.652373  213635 cri.go:89] found id: ""
	I0414 17:48:52.652402  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.652413  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:52.652420  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:52.652481  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:52.689582  213635 cri.go:89] found id: ""
	I0414 17:48:52.689614  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.689626  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:52.689637  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:52.689651  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:52.741215  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:52.741254  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:52.757324  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:52.757361  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:52.828589  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:52.828609  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:52.828621  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:52.918483  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:52.918527  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:49.290709  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:51.781114  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:53.343155  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:53.842838  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:54.343070  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:54.843789  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:55.342935  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:55.843502  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:55.939704  212456 kubeadm.go:1113] duration metric: took 3.858757705s to wait for elevateKubeSystemPrivileges
	I0414 17:48:55.939738  212456 kubeadm.go:394] duration metric: took 5m0.143792732s to StartCluster
	I0414 17:48:55.939772  212456 settings.go:142] acquiring lock: {Name:mk0f1596f566b3225bf96154f374fff0641b21e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:48:55.939872  212456 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:48:55.941014  212456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:48:55.941300  212456 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 17:48:55.941438  212456 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 17:48:55.941538  212456 config.go:182] Loaded profile config "default-k8s-diff-port-061428": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:48:55.941554  212456 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-061428"
	I0414 17:48:55.941576  212456 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-061428"
	I0414 17:48:55.941591  212456 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-061428"
	I0414 17:48:55.941600  212456 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-061428"
	I0414 17:48:55.941602  212456 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-061428"
	I0414 17:48:55.941601  212456 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-061428"
	W0414 17:48:55.941614  212456 addons.go:247] addon dashboard should already be in state true
	I0414 17:48:55.941622  212456 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-061428"
	W0414 17:48:55.941645  212456 addons.go:247] addon metrics-server should already be in state true
	I0414 17:48:55.941654  212456 host.go:66] Checking if "default-k8s-diff-port-061428" exists ...
	I0414 17:48:55.941580  212456 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-061428"
	I0414 17:48:55.941676  212456 host.go:66] Checking if "default-k8s-diff-port-061428" exists ...
	W0414 17:48:55.941703  212456 addons.go:247] addon storage-provisioner should already be in state true
	I0414 17:48:55.941749  212456 host.go:66] Checking if "default-k8s-diff-port-061428" exists ...
	I0414 17:48:55.942083  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.942123  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.942152  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.942089  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.942265  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.942088  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.942329  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.942159  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.943212  212456 out.go:177] * Verifying Kubernetes components...
	I0414 17:48:55.944529  212456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:48:55.961205  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42543
	I0414 17:48:55.961205  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I0414 17:48:55.961207  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46211
	I0414 17:48:55.961746  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.961764  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.961872  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.962378  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.962406  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.962382  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.962446  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.962515  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.962533  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.962928  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.963036  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.963098  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetState
	I0414 17:48:55.963185  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.963383  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40315
	I0414 17:48:55.963645  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.963676  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.963884  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.963930  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.964392  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.964780  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.964796  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.965235  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.965735  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.965770  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.966920  212456 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-061428"
	W0414 17:48:55.966941  212456 addons.go:247] addon default-storageclass should already be in state true
	I0414 17:48:55.966965  212456 host.go:66] Checking if "default-k8s-diff-port-061428" exists ...
	I0414 17:48:55.967303  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.967339  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.981120  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34037
	I0414 17:48:55.981603  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.982500  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.982521  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.982919  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.983222  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetState
	I0414 17:48:55.983374  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44475
	I0414 17:48:55.983676  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.987256  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .DriverName
	I0414 17:48:55.987275  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46701
	I0414 17:48:55.987392  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.987404  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.987825  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.988138  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.988179  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.988192  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.988507  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.988780  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetState
	I0414 17:48:55.988791  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetState
	I0414 17:48:55.989758  212456 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0414 17:48:55.991265  212456 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0414 17:48:55.991271  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .DriverName
	I0414 17:48:55.991283  212456 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0414 17:48:55.991300  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHHostname
	I0414 17:48:55.992806  212456 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0414 17:48:55.993944  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .DriverName
	I0414 17:48:55.995202  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:55.995700  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:77:2e", ip: ""} in network mk-default-k8s-diff-port-061428: {Iface:virbr3 ExpiryTime:2025-04-14 18:43:42 +0000 UTC Type:0 Mac:52:54:00:b1:77:2e Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-061428 Clientid:01:52:54:00:b1:77:2e}
	I0414 17:48:55.995715  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined IP address 192.168.61.196 and MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:55.995878  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHPort
	I0414 17:48:55.995970  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHKeyPath
	I0414 17:48:55.996048  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHUsername
	I0414 17:48:55.996310  212456 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/default-k8s-diff-port-061428/id_rsa Username:docker}
	I0414 17:48:55.998615  212456 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0414 17:48:55.998632  212456 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:48:55.999859  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0414 17:48:55.999877  212456 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0414 17:48:55.999893  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHHostname
	I0414 17:48:56.000008  212456 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:48:56.000031  212456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 17:48:56.000048  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHHostname
	I0414 17:48:56.003728  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:56.004208  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:77:2e", ip: ""} in network mk-default-k8s-diff-port-061428: {Iface:virbr3 ExpiryTime:2025-04-14 18:43:42 +0000 UTC Type:0 Mac:52:54:00:b1:77:2e Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-061428 Clientid:01:52:54:00:b1:77:2e}
	I0414 17:48:56.004226  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined IP address 192.168.61.196 and MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:56.004232  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:56.004445  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHPort
	I0414 17:48:56.004661  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHKeyPath
	I0414 17:48:56.004738  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:77:2e", ip: ""} in network mk-default-k8s-diff-port-061428: {Iface:virbr3 ExpiryTime:2025-04-14 18:43:42 +0000 UTC Type:0 Mac:52:54:00:b1:77:2e Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-061428 Clientid:01:52:54:00:b1:77:2e}
	I0414 17:48:56.004762  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined IP address 192.168.61.196 and MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:56.004788  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHUsername
	I0414 17:48:56.004926  212456 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/default-k8s-diff-port-061428/id_rsa Username:docker}
	I0414 17:48:56.005143  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHPort
	I0414 17:48:56.005294  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHKeyPath
	I0414 17:48:56.005400  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHUsername
	I0414 17:48:56.005546  212456 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/default-k8s-diff-port-061428/id_rsa Username:docker}
	I0414 17:48:56.015091  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41055
	I0414 17:48:56.015439  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:56.015805  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:56.015814  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:56.016147  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:56.016520  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:56.016543  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:56.032058  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44219
	I0414 17:48:56.032451  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:56.032966  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:56.032988  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:56.033343  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:56.033531  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetState
	I0414 17:48:56.035026  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .DriverName
	I0414 17:48:56.035244  212456 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 17:48:56.035267  212456 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 17:48:56.035289  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHHostname
	I0414 17:48:56.037961  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:56.039361  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:77:2e", ip: ""} in network mk-default-k8s-diff-port-061428: {Iface:virbr3 ExpiryTime:2025-04-14 18:43:42 +0000 UTC Type:0 Mac:52:54:00:b1:77:2e Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-061428 Clientid:01:52:54:00:b1:77:2e}
	I0414 17:48:56.039393  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined IP address 192.168.61.196 and MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:56.042043  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHPort
	I0414 17:48:56.042282  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHKeyPath
	I0414 17:48:56.044137  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHUsername
	I0414 17:48:56.044613  212456 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/default-k8s-diff-port-061428/id_rsa Username:docker}
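Each sshutil.go line above opens an SSH client to the node (IP 192.168.61.196, port 22, the per-profile RSA key, user docker); ssh_runner then pushes the addon manifests and runs commands over those connections. Below is a minimal sketch of that pattern using golang.org/x/crypto/ssh — the key path and the command are illustrative, and this is not minikube's actual ssh_runner code:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical key path; minikube keeps one id_rsa per machine profile.
	key, err := os.ReadFile("/path/to/machines/example/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	}
	client, err := ssh.Dial("tcp", "192.168.61.196:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// One session per command, mirroring ssh_runner's Run calls.
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, runErr := session.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("kubelet: %s (err=%v)\n", out, runErr)
}
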
	I0414 17:48:56.170857  212456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:48:56.201264  212456 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-061428" to be "Ready" ...
	I0414 17:48:56.215666  212456 node_ready.go:49] node "default-k8s-diff-port-061428" has status "Ready":"True"
	I0414 17:48:56.215687  212456 node_ready.go:38] duration metric: took 14.390119ms for node "default-k8s-diff-port-061428" to be "Ready" ...
	I0414 17:48:56.215698  212456 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:48:56.219556  212456 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:48:56.325515  212456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:48:56.328344  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0414 17:48:56.328369  212456 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0414 17:48:56.366616  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0414 17:48:56.366644  212456 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0414 17:48:56.366924  212456 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0414 17:48:56.366947  212456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0414 17:48:56.400343  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0414 17:48:56.400365  212456 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0414 17:48:56.403134  212456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 17:48:56.450599  212456 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0414 17:48:56.450631  212456 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0414 17:48:56.474003  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0414 17:48:56.474030  212456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0414 17:48:56.564681  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0414 17:48:56.564716  212456 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0414 17:48:56.565092  212456 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 17:48:56.565114  212456 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0414 17:48:56.634647  212456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 17:48:56.667139  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0414 17:48:56.667170  212456 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0414 17:48:56.800483  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0414 17:48:56.800513  212456 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0414 17:48:56.844350  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0414 17:48:56.844380  212456 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0414 17:48:56.924656  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0414 17:48:56.924693  212456 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0414 17:48:57.009703  212456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0414 17:48:57.322557  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.322593  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.322574  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.322695  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.322923  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.322939  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.322953  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.322961  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.322979  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.322998  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.323007  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.323016  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.324913  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | Closing plugin on server side
	I0414 17:48:57.324970  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | Closing plugin on server side
	I0414 17:48:57.324986  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.324997  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.325005  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.325019  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.345450  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.345469  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.345740  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.345761  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.943361  212456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.308667432s)
	I0414 17:48:57.943408  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.943422  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.943797  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.943831  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.943842  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.943851  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.943880  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | Closing plugin on server side
	I0414 17:48:57.944243  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | Closing plugin on server side
	I0414 17:48:57.944262  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.944275  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.944294  212456 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-061428"
	I0414 17:48:55.461925  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:55.475396  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:55.475472  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:55.511338  213635 cri.go:89] found id: ""
	I0414 17:48:55.511366  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.511374  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:55.511381  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:55.511444  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:55.547324  213635 cri.go:89] found id: ""
	I0414 17:48:55.547348  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.547355  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:55.547366  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:55.547423  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:55.593274  213635 cri.go:89] found id: ""
	I0414 17:48:55.593303  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.593314  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:55.593322  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:55.593386  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:55.628013  213635 cri.go:89] found id: ""
	I0414 17:48:55.628042  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.628053  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:55.628060  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:55.628127  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:55.663752  213635 cri.go:89] found id: ""
	I0414 17:48:55.663786  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.663798  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:55.663805  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:55.663867  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:55.700578  213635 cri.go:89] found id: ""
	I0414 17:48:55.700601  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.700609  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:55.700614  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:55.700661  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:55.733772  213635 cri.go:89] found id: ""
	I0414 17:48:55.733797  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.733805  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:55.733811  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:55.733891  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:55.769135  213635 cri.go:89] found id: ""
	I0414 17:48:55.769161  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.769174  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:55.769184  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:55.769196  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:55.810526  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:55.810560  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:55.863132  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:55.863166  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:55.879346  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:55.879381  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:55.961385  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:55.961403  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:55.961418  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
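The block above is one pass of a repeating probe from the second test process (pid 213635): for each control-plane component it lists matching CRI containers with crictl and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, and CRI-O logs. A rough local-only equivalent of the listing step — assuming crictl is on PATH and omitting the sudo-over-SSH plumbing the test actually uses:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// --quiet prints only container IDs, one per line; -a includes exited containers.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: found %v\n", name, ids)
	}
}
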
	I0414 17:48:53.781674  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:55.784266  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:58.283947  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:58.225462  212456 pod_ready.go:103] pod "etcd-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:59.380615  212456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.370840717s)
	I0414 17:48:59.380686  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:59.380701  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:59.381003  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:59.381024  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:59.381039  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:59.381047  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:59.381256  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | Closing plugin on server side
	I0414 17:48:59.381286  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:59.381299  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:59.382695  212456 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-061428 addons enable metrics-server
	
	I0414 17:48:59.383922  212456 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0414 17:48:59.385040  212456 addons.go:514] duration metric: took 3.443627022s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0414 17:49:00.227357  212456 pod_ready.go:103] pod "etcd-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:02.723936  212456 pod_ready.go:103] pod "etcd-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:58.566639  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:58.580841  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:58.580906  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:58.620613  213635 cri.go:89] found id: ""
	I0414 17:48:58.620647  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.620659  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:58.620668  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:58.620736  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:58.661513  213635 cri.go:89] found id: ""
	I0414 17:48:58.661549  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.661559  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:58.661567  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:58.661637  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:58.710480  213635 cri.go:89] found id: ""
	I0414 17:48:58.710512  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.710524  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:58.710531  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:58.710594  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:58.755300  213635 cri.go:89] found id: ""
	I0414 17:48:58.755328  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.755339  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:58.755346  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:58.755403  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:58.791364  213635 cri.go:89] found id: ""
	I0414 17:48:58.791396  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.791416  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:58.791424  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:58.791490  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:58.830571  213635 cri.go:89] found id: ""
	I0414 17:48:58.830598  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.830610  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:58.830617  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:58.830677  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:58.864897  213635 cri.go:89] found id: ""
	I0414 17:48:58.864924  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.864933  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:58.864940  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:58.865000  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:58.900362  213635 cri.go:89] found id: ""
	I0414 17:48:58.900393  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.900403  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:58.900414  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:58.900431  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:58.953300  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:58.953340  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:58.974592  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:58.974634  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:59.054206  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:59.054234  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:59.054251  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:59.137354  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:59.137390  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:01.684252  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:01.702697  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:01.702776  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:01.746204  213635 cri.go:89] found id: ""
	I0414 17:49:01.746232  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.746276  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:01.746284  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:01.746347  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:01.784544  213635 cri.go:89] found id: ""
	I0414 17:49:01.784574  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.784584  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:01.784591  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:01.784649  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:01.821353  213635 cri.go:89] found id: ""
	I0414 17:49:01.821382  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.821392  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:01.821399  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:01.821454  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:01.855681  213635 cri.go:89] found id: ""
	I0414 17:49:01.855707  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.855715  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:01.855723  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:01.855783  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:01.891114  213635 cri.go:89] found id: ""
	I0414 17:49:01.891142  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.891153  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:01.891161  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:01.891230  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:01.926536  213635 cri.go:89] found id: ""
	I0414 17:49:01.926570  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.926581  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:01.926588  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:01.926648  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:01.971430  213635 cri.go:89] found id: ""
	I0414 17:49:01.971455  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.971462  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:01.971468  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:01.971513  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:02.010416  213635 cri.go:89] found id: ""
	I0414 17:49:02.010444  213635 logs.go:282] 0 containers: []
	W0414 17:49:02.010452  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:02.010461  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:02.010476  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:02.093422  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:02.093451  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:02.093468  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:02.175219  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:02.175256  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:02.216929  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:02.216957  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:02.269151  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:02.269188  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:00.784029  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:03.284820  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:03.725360  212456 pod_ready.go:93] pod "etcd-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:03.725386  212456 pod_ready.go:82] duration metric: took 7.505806576s for pod "etcd-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:03.725396  212456 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:03.729623  212456 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:03.729653  212456 pod_ready.go:82] duration metric: took 4.248954ms for pod "kube-apiserver-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:03.729668  212456 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:03.733261  212456 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:03.733283  212456 pod_ready.go:82] duration metric: took 3.605315ms for pod "kube-controller-manager-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:03.733294  212456 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:04.239874  212456 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:04.239896  212456 pod_ready.go:82] duration metric: took 506.59428ms for pod "kube-scheduler-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:04.239904  212456 pod_ready.go:39] duration metric: took 8.024194625s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
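pod_ready.go polls each system pod until its PodReady condition reports True (etcd above took 7.5s). The same check can be written directly against client-go; this sketch assumes the kubeconfig path and pod name taken from the log and is not minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s, give up after 6m, matching the "waiting up to 6m0s" budget above.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx,
				"etcd-default-k8s-diff-port-061428", metav1.GetOptions{})
			if err != nil {
				return false, nil // not found yet; keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("ready:", err == nil)
}
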
	I0414 17:49:04.239919  212456 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:49:04.239968  212456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:04.262907  212456 api_server.go:72] duration metric: took 8.321571945s to wait for apiserver process to appear ...
	I0414 17:49:04.262930  212456 api_server.go:88] waiting for apiserver healthz status ...
	I0414 17:49:04.262950  212456 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I0414 17:49:04.267486  212456 api_server.go:279] https://192.168.61.196:8444/healthz returned 200:
	ok
	I0414 17:49:04.268404  212456 api_server.go:141] control plane version: v1.32.2
	I0414 17:49:04.268420  212456 api_server.go:131] duration metric: took 5.484737ms to wait for apiserver health ...
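The healthz probe above is a plain HTTPS GET against the apiserver's /healthz endpoint, which returns the body "ok" with status 200 once the control plane is healthy. A minimal stand-in — certificate verification is skipped here for brevity; a real client would trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Test-only: skip CA verification against the VM's self-signed apiserver cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.61.196:8444/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
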
	I0414 17:49:04.268432  212456 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 17:49:04.271870  212456 system_pods.go:59] 9 kube-system pods found
	I0414 17:49:04.271899  212456 system_pods.go:61] "coredns-668d6bf9bc-mdntl" [009622fa-7c7c-4903-945f-d2bbf5262a9b] Running
	I0414 17:49:04.271908  212456 system_pods.go:61] "coredns-668d6bf9bc-qhjnc" [97f585f4-e039-4c34-b132-9a56318e7ed0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 17:49:04.271918  212456 system_pods.go:61] "etcd-default-k8s-diff-port-061428" [3f7f2d5f-ae4c-4946-952c-9aae0156cf95] Running
	I0414 17:49:04.271924  212456 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-061428" [accdcd02-d8e2-447c-83f2-a6cd0b935b7b] Running
	I0414 17:49:04.271928  212456 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-061428" [08894510-d41c-4e93-b1a9-43888732429b] Running
	I0414 17:49:04.271931  212456 system_pods.go:61] "kube-proxy-2ft7c" [7d0e0148-267c-4421-846e-7d2f8f2f3a14] Running
	I0414 17:49:04.271935  212456 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-061428" [9d32a872-0f66-4f25-81f1-9707372dbc6f] Running
	I0414 17:49:04.271939  212456 system_pods.go:61] "metrics-server-f79f97bbb-g2k8m" [b02b8a70-ae5c-4677-83b5-b817fc733882] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:49:04.271945  212456 system_pods.go:61] "storage-provisioner" [4d1ccb5e-58d4-43ea-aca2-885ad7af9484] Running
	I0414 17:49:04.271951  212456 system_pods.go:74] duration metric: took 3.508628ms to wait for pod list to return data ...
	I0414 17:49:04.271959  212456 default_sa.go:34] waiting for default service account to be created ...
	I0414 17:49:04.274062  212456 default_sa.go:45] found service account: "default"
	I0414 17:49:04.274080  212456 default_sa.go:55] duration metric: took 2.11536ms for default service account to be created ...
	I0414 17:49:04.274086  212456 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 17:49:04.324903  212456 system_pods.go:86] 9 kube-system pods found
	I0414 17:49:04.324934  212456 system_pods.go:89] "coredns-668d6bf9bc-mdntl" [009622fa-7c7c-4903-945f-d2bbf5262a9b] Running
	I0414 17:49:04.324947  212456 system_pods.go:89] "coredns-668d6bf9bc-qhjnc" [97f585f4-e039-4c34-b132-9a56318e7ed0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 17:49:04.324954  212456 system_pods.go:89] "etcd-default-k8s-diff-port-061428" [3f7f2d5f-ae4c-4946-952c-9aae0156cf95] Running
	I0414 17:49:04.324963  212456 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-061428" [accdcd02-d8e2-447c-83f2-a6cd0b935b7b] Running
	I0414 17:49:04.324968  212456 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-061428" [08894510-d41c-4e93-b1a9-43888732429b] Running
	I0414 17:49:04.324974  212456 system_pods.go:89] "kube-proxy-2ft7c" [7d0e0148-267c-4421-846e-7d2f8f2f3a14] Running
	I0414 17:49:04.324979  212456 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-061428" [9d32a872-0f66-4f25-81f1-9707372dbc6f] Running
	I0414 17:49:04.324987  212456 system_pods.go:89] "metrics-server-f79f97bbb-g2k8m" [b02b8a70-ae5c-4677-83b5-b817fc733882] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:49:04.324993  212456 system_pods.go:89] "storage-provisioner" [4d1ccb5e-58d4-43ea-aca2-885ad7af9484] Running
	I0414 17:49:04.325002  212456 system_pods.go:126] duration metric: took 50.910972ms to wait for k8s-apps to be running ...
	I0414 17:49:04.325021  212456 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 17:49:04.325080  212456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:49:04.339750  212456 system_svc.go:56] duration metric: took 14.732403ms WaitForService to wait for kubelet
	I0414 17:49:04.339775  212456 kubeadm.go:582] duration metric: took 8.398444377s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 17:49:04.339798  212456 node_conditions.go:102] verifying NodePressure condition ...
	I0414 17:49:04.524559  212456 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 17:49:04.524654  212456 node_conditions.go:123] node cpu capacity is 2
	I0414 17:49:04.524675  212456 node_conditions.go:105] duration metric: took 184.870799ms to run NodePressure ...
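node_conditions.go reads the node's reported capacity (2 CPUs and 17734596Ki of ephemeral storage here) while verifying NodePressure. Reading the same status fields via client-go, again assuming the kubeconfig path and node name from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"default-k8s-diff-port-061428", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Capacity is a ResourceList (map of resource name to quantity).
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
}
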
	I0414 17:49:04.524690  212456 start.go:241] waiting for startup goroutines ...
	I0414 17:49:04.524701  212456 start.go:246] waiting for cluster config update ...
	I0414 17:49:04.524721  212456 start.go:255] writing updated cluster config ...
	I0414 17:49:04.525044  212456 ssh_runner.go:195] Run: rm -f paused
	I0414 17:49:04.582311  212456 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 17:49:04.584154  212456 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-061428" cluster and "default" namespace by default
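The closing version line compares the minor versions of the local kubectl (1.32.3) and the cluster (1.32.2); kubectl supports one minor version of skew against the apiserver, so a skew of 0 needs no warning. A toy version of that arithmetic (naive "major.minor.patch" parsing, illustrative only):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" version string.
func minor(v string) int {
	m, _ := strconv.Atoi(strings.Split(v, ".")[1])
	return m
}

func main() {
	kubectl, cluster := "1.32.3", "1.32.2"
	skew := minor(kubectl) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
}
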
	I0414 17:49:04.787535  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:04.801528  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:04.801604  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:04.838408  213635 cri.go:89] found id: ""
	I0414 17:49:04.838442  213635 logs.go:282] 0 containers: []
	W0414 17:49:04.838458  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:04.838466  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:04.838529  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:04.888614  213635 cri.go:89] found id: ""
	I0414 17:49:04.888645  213635 logs.go:282] 0 containers: []
	W0414 17:49:04.888658  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:04.888667  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:04.888720  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:04.931279  213635 cri.go:89] found id: ""
	I0414 17:49:04.931307  213635 logs.go:282] 0 containers: []
	W0414 17:49:04.931317  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:04.931325  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:04.931461  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:04.970024  213635 cri.go:89] found id: ""
	I0414 17:49:04.970052  213635 logs.go:282] 0 containers: []
	W0414 17:49:04.970061  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:04.970069  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:04.970138  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:05.012914  213635 cri.go:89] found id: ""
	I0414 17:49:05.012938  213635 logs.go:282] 0 containers: []
	W0414 17:49:05.012958  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:05.012967  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:05.013027  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:05.050788  213635 cri.go:89] found id: ""
	I0414 17:49:05.050811  213635 logs.go:282] 0 containers: []
	W0414 17:49:05.050834  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:05.050842  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:05.050905  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:05.090988  213635 cri.go:89] found id: ""
	I0414 17:49:05.091017  213635 logs.go:282] 0 containers: []
	W0414 17:49:05.091028  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:05.091036  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:05.091101  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:05.127104  213635 cri.go:89] found id: ""
	I0414 17:49:05.127138  213635 logs.go:282] 0 containers: []
	W0414 17:49:05.127149  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:05.127160  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:05.127176  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:05.143792  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:05.143828  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:05.218655  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:05.218680  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:05.218697  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:05.306169  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:05.306201  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:05.347150  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:05.347190  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:07.907355  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:07.920775  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:07.920854  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:07.958486  213635 cri.go:89] found id: ""
	I0414 17:49:07.958517  213635 logs.go:282] 0 containers: []
	W0414 17:49:07.958527  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:07.958534  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:07.958600  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:07.995351  213635 cri.go:89] found id: ""
	I0414 17:49:07.995383  213635 logs.go:282] 0 containers: []
	W0414 17:49:07.995394  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:07.995401  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:07.995464  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:08.031830  213635 cri.go:89] found id: ""
	I0414 17:49:08.031864  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.031876  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:08.031885  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:08.031953  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:08.072277  213635 cri.go:89] found id: ""
	I0414 17:49:08.072308  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.072321  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:08.072328  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:08.072400  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:08.107778  213635 cri.go:89] found id: ""
	I0414 17:49:08.107811  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.107823  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:08.107832  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:08.107889  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:08.144220  213635 cri.go:89] found id: ""
	I0414 17:49:08.144254  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.144267  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:08.144276  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:08.144350  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:08.199205  213635 cri.go:89] found id: ""
	I0414 17:49:08.199238  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.199251  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:08.199260  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:08.199329  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:08.236929  213635 cri.go:89] found id: ""
	I0414 17:49:08.236966  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.236978  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:08.236989  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:08.237006  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:05.781883  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:07.782747  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:08.288285  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:08.288309  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:08.301531  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:08.301562  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:08.370610  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:08.370643  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:08.370663  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:08.449517  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:08.449559  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:10.989149  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:11.004705  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:11.004776  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:11.044842  213635 cri.go:89] found id: ""
	I0414 17:49:11.044872  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.044882  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:11.044889  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:11.044944  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:11.079268  213635 cri.go:89] found id: ""
	I0414 17:49:11.079296  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.079306  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:11.079313  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:11.079373  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:11.111894  213635 cri.go:89] found id: ""
	I0414 17:49:11.111921  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.111931  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:11.111937  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:11.111993  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:11.147005  213635 cri.go:89] found id: ""
	I0414 17:49:11.147029  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.147039  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:11.147046  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:11.147115  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:11.181246  213635 cri.go:89] found id: ""
	I0414 17:49:11.181274  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.181281  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:11.181286  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:11.181333  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:11.222368  213635 cri.go:89] found id: ""
	I0414 17:49:11.222396  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.222404  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:11.222409  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:11.222455  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:11.262336  213635 cri.go:89] found id: ""
	I0414 17:49:11.262360  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.262367  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:11.262373  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:11.262430  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:11.305115  213635 cri.go:89] found id: ""
	I0414 17:49:11.305146  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.305157  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:11.305168  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:11.305180  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:11.340697  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:11.340726  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:11.390526  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:11.390566  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:11.403671  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:11.403699  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:11.478187  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0414 17:49:11.478210  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:11.478225  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:10.282583  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:12.781281  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:14.950237  212269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (28.069030835s)
	I0414 17:49:14.950306  212269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:49:14.971834  212269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:49:14.987342  212269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:49:15.000668  212269 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:49:15.000687  212269 kubeadm.go:157] found existing configuration files:
	
	I0414 17:49:15.000752  212269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:49:15.020443  212269 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:49:15.020492  212269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:49:15.037229  212269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:49:15.049591  212269 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:49:15.049642  212269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:49:15.059769  212269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:49:15.077786  212269 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:49:15.077853  212269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:49:15.089728  212269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:49:15.100674  212269 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:49:15.100715  212269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
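The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already references the expected control-plane endpoint; otherwise it is deleted so kubeadm can regenerate it. A condensed sketch of the same logic, assuming the endpoint used in this run:

#!/bin/bash
# Drop any kubeconfig that does not point at the expected endpoint
# (a sketch of the behavior logged above, not the Go implementation).
endpoint="https://control-plane.minikube.internal:8443"
for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  if ! sudo grep -qs "$endpoint" "/etc/kubernetes/$conf"; then
    sudo rm -f "/etc/kubernetes/$conf"
  fi
done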
	I0414 17:49:15.111637  212269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:49:15.291703  212269 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:49:14.068187  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:14.082429  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:14.082502  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:14.118294  213635 cri.go:89] found id: ""
	I0414 17:49:14.118322  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.118333  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:14.118339  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:14.118399  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:14.150631  213635 cri.go:89] found id: ""
	I0414 17:49:14.150661  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.150673  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:14.150680  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:14.150739  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:14.182138  213635 cri.go:89] found id: ""
	I0414 17:49:14.182168  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.182178  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:14.182191  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:14.182245  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:14.215897  213635 cri.go:89] found id: ""
	I0414 17:49:14.215926  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.215939  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:14.215945  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:14.216007  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:14.250709  213635 cri.go:89] found id: ""
	I0414 17:49:14.250735  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.250745  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:14.250752  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:14.250827  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:14.284335  213635 cri.go:89] found id: ""
	I0414 17:49:14.284359  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.284369  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:14.284377  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:14.284437  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:14.320670  213635 cri.go:89] found id: ""
	I0414 17:49:14.320695  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.320705  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:14.320712  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:14.320772  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:14.352588  213635 cri.go:89] found id: ""
	I0414 17:49:14.352612  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.352620  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:14.352630  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:14.352643  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:14.402495  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:14.402527  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:14.415185  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:14.415211  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:14.484937  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0414 17:49:14.484961  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:14.484976  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:14.568927  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:14.568962  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:17.105989  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:17.119732  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:17.119803  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:17.155999  213635 cri.go:89] found id: ""
	I0414 17:49:17.156027  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.156038  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:17.156046  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:17.156117  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:17.190158  213635 cri.go:89] found id: ""
	I0414 17:49:17.190180  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.190188  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:17.190193  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:17.190254  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:17.228075  213635 cri.go:89] found id: ""
	I0414 17:49:17.228116  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.228128  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:17.228135  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:17.228199  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:17.276284  213635 cri.go:89] found id: ""
	I0414 17:49:17.276311  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.276321  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:17.276328  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:17.276391  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:17.323644  213635 cri.go:89] found id: ""
	I0414 17:49:17.323672  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.323684  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:17.323691  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:17.323755  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:17.361870  213635 cri.go:89] found id: ""
	I0414 17:49:17.361898  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.361910  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:17.361917  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:17.361978  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:17.396346  213635 cri.go:89] found id: ""
	I0414 17:49:17.396371  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.396382  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:17.396389  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:17.396450  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:17.434395  213635 cri.go:89] found id: ""
	I0414 17:49:17.434425  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.434434  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:17.434445  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:17.434460  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:17.486946  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:17.486987  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:17.504167  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:17.504200  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:17.596627  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0414 17:49:17.596655  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:17.596671  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:17.688874  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:17.688911  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:15.285389  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:17.783942  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:20.238457  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:20.252780  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:20.252859  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:20.299511  213635 cri.go:89] found id: ""
	I0414 17:49:20.299535  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.299543  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:20.299549  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:20.299607  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:20.346458  213635 cri.go:89] found id: ""
	I0414 17:49:20.346484  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.346493  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:20.346500  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:20.346552  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:20.390657  213635 cri.go:89] found id: ""
	I0414 17:49:20.390677  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.390684  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:20.390689  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:20.390738  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:20.435444  213635 cri.go:89] found id: ""
	I0414 17:49:20.435468  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.435474  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:20.435480  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:20.435520  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:20.470010  213635 cri.go:89] found id: ""
	I0414 17:49:20.470030  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.470036  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:20.470044  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:20.470089  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:20.517097  213635 cri.go:89] found id: ""
	I0414 17:49:20.517130  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.517141  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:20.517149  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:20.517216  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:20.558688  213635 cri.go:89] found id: ""
	I0414 17:49:20.558717  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.558727  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:20.558733  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:20.558796  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:20.598644  213635 cri.go:89] found id: ""
	I0414 17:49:20.598679  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.598687  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:20.598695  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:20.598706  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:20.674514  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:20.674571  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:20.691779  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:20.691808  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:20.759608  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0414 17:49:20.759640  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:20.759652  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:20.852072  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:20.852104  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:23.435254  212269 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 17:49:23.435346  212269 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:49:23.435469  212269 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:49:23.435587  212269 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:49:23.435698  212269 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 17:49:23.435786  212269 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:49:23.437325  212269 out.go:235]   - Generating certificates and keys ...
	I0414 17:49:23.437460  212269 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:49:23.437553  212269 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:49:23.437665  212269 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 17:49:23.437786  212269 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 17:49:23.437914  212269 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 17:49:23.438026  212269 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 17:49:23.438157  212269 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 17:49:23.438253  212269 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 17:49:23.438370  212269 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 17:49:23.438493  212269 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 17:49:23.438556  212269 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 17:49:23.438629  212269 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:49:23.438700  212269 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:49:23.438783  212269 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 17:49:23.438855  212269 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:49:23.438939  212269 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:49:23.439013  212269 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:49:23.439123  212269 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:49:23.439213  212269 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:49:23.440637  212269 out.go:235]   - Booting up control plane ...
	I0414 17:49:23.440748  212269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:49:23.440847  212269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:49:23.440957  212269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:49:23.441124  212269 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:49:23.441250  212269 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:49:23.441317  212269 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:49:23.441508  212269 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 17:49:23.441668  212269 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 17:49:23.441883  212269 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001443308s
	I0414 17:49:23.442009  212269 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 17:49:23.442095  212269 kubeadm.go:310] [api-check] The API server is healthy after 5.001630109s
	I0414 17:49:23.442250  212269 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 17:49:23.442407  212269 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 17:49:23.442500  212269 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 17:49:23.442809  212269 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-721806 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 17:49:23.442894  212269 kubeadm.go:310] [bootstrap-token] Using token: hi4egh.pplxy8fivi6fy4jt
	I0414 17:49:23.444130  212269 out.go:235]   - Configuring RBAC rules ...
	I0414 17:49:23.444269  212269 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 17:49:23.444373  212269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 17:49:23.444555  212269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 17:49:23.444724  212269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 17:49:23.444870  212269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 17:49:23.444983  212269 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 17:49:23.445140  212269 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 17:49:23.445205  212269 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 17:49:23.445269  212269 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 17:49:23.445279  212269 kubeadm.go:310] 
	I0414 17:49:23.445361  212269 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 17:49:23.445373  212269 kubeadm.go:310] 
	I0414 17:49:23.445471  212269 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 17:49:23.445483  212269 kubeadm.go:310] 
	I0414 17:49:23.445514  212269 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 17:49:23.445592  212269 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 17:49:23.445659  212269 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 17:49:23.445669  212269 kubeadm.go:310] 
	I0414 17:49:23.445746  212269 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 17:49:23.445756  212269 kubeadm.go:310] 
	I0414 17:49:23.445816  212269 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 17:49:23.445896  212269 kubeadm.go:310] 
	I0414 17:49:23.445976  212269 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 17:49:23.446046  212269 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 17:49:23.446113  212269 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 17:49:23.446122  212269 kubeadm.go:310] 
	I0414 17:49:23.446188  212269 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 17:49:23.446250  212269 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 17:49:23.446255  212269 kubeadm.go:310] 
	I0414 17:49:23.446323  212269 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hi4egh.pplxy8fivi6fy4jt \
	I0414 17:49:23.446414  212269 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d \
	I0414 17:49:23.446434  212269 kubeadm.go:310] 	--control-plane 
	I0414 17:49:23.446438  212269 kubeadm.go:310] 
	I0414 17:49:23.446507  212269 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 17:49:23.446513  212269 kubeadm.go:310] 
	I0414 17:49:23.446582  212269 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hi4egh.pplxy8fivi6fy4jt \
	I0414 17:49:23.446707  212269 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d 
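kubeadm's success banner above already carries the standard follow-up steps. Independent of minikube's own verification further down, a quick smoke test against the freshly written admin kubeconfig would be (using the versioned kubectl binary the test itself invokes):

# Confirm the API server answers and the node registered.
sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system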
	I0414 17:49:23.446730  212269 cni.go:84] Creating CNI manager for ""
	I0414 17:49:23.446739  212269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:49:23.448085  212269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 17:49:20.288087  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:22.783079  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:23.449087  212269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 17:49:23.461577  212269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
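The conflist content is not shown in the log (only its size, 496 bytes). A representative bridge+portmap conflist of the kind written here might look like the following; the subnet and option values are illustrative, not read from this run:

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF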
	I0414 17:49:23.480701  212269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 17:49:23.480761  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:23.480789  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-721806 minikube.k8s.io/updated_at=2025_04_14T17_49_23_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f1e69a1cd498979c80dbe968253c827f6eb2cf37 minikube.k8s.io/name=no-preload-721806 minikube.k8s.io/primary=true
	I0414 17:49:23.822239  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:23.822379  212269 ops.go:34] apiserver oom_adj: -16
	I0414 17:49:24.322913  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:24.822958  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:25.322967  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:25.823342  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:26.322688  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:26.822585  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:27.322370  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:27.823299  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:27.966937  212269 kubeadm.go:1113] duration metric: took 4.486233002s to wait for elevateKubeSystemPrivileges
	I0414 17:49:27.966971  212269 kubeadm.go:394] duration metric: took 5m39.576838178s to StartCluster
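ops.go recorded "apiserver oom_adj: -16" above: minikube pins the API server near the bottom of the legacy -17..15 OOM scale so the kernel's OOM killer reaps it last (-17 would disable OOM kills entirely). A quick way to read the same value, assuming pgrep matches a single apiserver process:

# Read the legacy OOM adjustment for the running kube-apiserver.
cat "/proc/$(pgrep -xn kube-apiserver)/oom_adj"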
	I0414 17:49:27.966992  212269 settings.go:142] acquiring lock: {Name:mk0f1596f566b3225bf96154f374fff0641b21e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:49:27.967081  212269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:49:27.968121  212269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:49:27.968336  212269 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 17:49:27.968477  212269 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
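The toEnable map above is the resolved addon state for this profile (dashboard, metrics-server, storage-provisioner, and default-storageclass on; everything else off). Driving the same addons by hand would look like this; a sketch using the profile name from this run:

# Enable the same addons interactively and list their state.
minikube -p no-preload-721806 addons enable dashboard
minikube -p no-preload-721806 addons enable metrics-server
minikube -p no-preload-721806 addons enable storage-provisioner
minikube -p no-preload-721806 addons list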
	I0414 17:49:27.968572  212269 config.go:182] Loaded profile config "no-preload-721806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:49:27.968640  212269 addons.go:69] Setting storage-provisioner=true in profile "no-preload-721806"
	I0414 17:49:27.968663  212269 addons.go:238] Setting addon storage-provisioner=true in "no-preload-721806"
	I0414 17:49:27.968667  212269 addons.go:69] Setting default-storageclass=true in profile "no-preload-721806"
	I0414 17:49:27.968685  212269 addons.go:69] Setting dashboard=true in profile "no-preload-721806"
	I0414 17:49:27.968689  212269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-721806"
	W0414 17:49:27.968693  212269 addons.go:247] addon storage-provisioner should already be in state true
	I0414 17:49:27.968698  212269 addons.go:69] Setting metrics-server=true in profile "no-preload-721806"
	I0414 17:49:27.968701  212269 addons.go:238] Setting addon dashboard=true in "no-preload-721806"
	W0414 17:49:27.968711  212269 addons.go:247] addon dashboard should already be in state true
	I0414 17:49:27.968713  212269 addons.go:238] Setting addon metrics-server=true in "no-preload-721806"
	W0414 17:49:27.968720  212269 addons.go:247] addon metrics-server should already be in state true
	I0414 17:49:27.968725  212269 host.go:66] Checking if "no-preload-721806" exists ...
	I0414 17:49:27.968737  212269 host.go:66] Checking if "no-preload-721806" exists ...
	I0414 17:49:27.968748  212269 host.go:66] Checking if "no-preload-721806" exists ...
	I0414 17:49:27.969136  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.969159  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.969174  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:27.969190  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:27.969136  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.969242  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:27.969294  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.969328  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:27.969547  212269 out.go:177] * Verifying Kubernetes components...
	I0414 17:49:27.970928  212269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:49:27.985862  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41137
	I0414 17:49:27.985940  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35053
	I0414 17:49:27.986359  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:27.986478  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:27.986876  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:27.986894  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:27.987035  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:27.987050  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:27.987339  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:27.987522  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetState
	I0414 17:49:27.987561  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:27.988294  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.988321  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:27.988647  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39863
	I0414 17:49:27.989258  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:27.990683  212269 addons.go:238] Setting addon default-storageclass=true in "no-preload-721806"
	W0414 17:49:27.990703  212269 addons.go:247] addon default-storageclass should already be in state true
	I0414 17:49:27.990734  212269 host.go:66] Checking if "no-preload-721806" exists ...
	I0414 17:49:27.991093  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.991124  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:27.991371  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34825
	I0414 17:49:27.991468  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:27.991483  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:27.991880  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:27.992418  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.992453  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:27.992667  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:27.993166  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:27.993181  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:27.993592  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:27.994151  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.994179  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:28.006693  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34701
	I0414 17:49:28.006725  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45783
	I0414 17:49:28.007104  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:28.007150  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:28.007487  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:28.007500  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:28.007611  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:28.007630  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:28.007860  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:28.008020  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetState
	I0414 17:49:28.008067  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:28.008548  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:28.008586  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:28.010355  212269 main.go:141] libmachine: (no-preload-721806) Calling .DriverName
	I0414 17:49:28.011939  212269 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0414 17:49:28.012527  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
	I0414 17:49:28.013128  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:28.013676  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:28.013704  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:28.013896  212269 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0414 17:49:28.014150  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:28.014326  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetState
	I0414 17:49:28.014618  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0414 17:49:28.014827  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0414 17:49:28.014838  212269 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0414 17:49:28.014860  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHHostname
	I0414 17:49:28.015140  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:28.015587  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:28.015603  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:28.016012  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:28.016211  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetState
	I0414 17:49:28.016728  212269 main.go:141] libmachine: (no-preload-721806) Calling .DriverName
	I0414 17:49:28.018254  212269 main.go:141] libmachine: (no-preload-721806) Calling .DriverName
	I0414 17:49:28.018509  212269 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0414 17:49:28.018914  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.019375  212269 main.go:141] libmachine: (no-preload-721806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:f0:13", ip: ""} in network mk-no-preload-721806: {Iface:virbr1 ExpiryTime:2025-04-14 18:43:22 +0000 UTC Type:0 Mac:52:54:00:96:f0:13 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:no-preload-721806 Clientid:01:52:54:00:96:f0:13}
	I0414 17:49:28.019390  212269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:49:23.392749  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:23.409465  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:23.409526  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:23.449515  213635 cri.go:89] found id: ""
	I0414 17:49:23.449542  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.449552  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:23.449559  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:23.449609  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:23.490201  213635 cri.go:89] found id: ""
	I0414 17:49:23.490225  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.490234  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:23.490242  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:23.490294  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:23.528644  213635 cri.go:89] found id: ""
	I0414 17:49:23.528673  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.528684  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:23.528692  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:23.528755  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:23.572217  213635 cri.go:89] found id: ""
	I0414 17:49:23.572245  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.572256  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:23.572263  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:23.572319  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:23.612901  213635 cri.go:89] found id: ""
	I0414 17:49:23.612922  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.612930  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:23.612936  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:23.612981  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:23.668230  213635 cri.go:89] found id: ""
	I0414 17:49:23.668256  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.668265  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:23.668271  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:23.668322  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:23.714238  213635 cri.go:89] found id: ""
	I0414 17:49:23.714265  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.714275  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:23.714282  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:23.714331  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:23.763817  213635 cri.go:89] found id: ""
	I0414 17:49:23.763853  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.763863  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:23.763872  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:23.763884  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:23.836441  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:23.836486  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:23.861896  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:23.861940  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:23.944757  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0414 17:49:23.944787  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:23.944806  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:24.029884  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:24.029923  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:26.571950  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:26.585122  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:26.585180  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:26.623368  213635 cri.go:89] found id: ""
	I0414 17:49:26.623392  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.623401  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:26.623409  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:26.623463  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:26.657588  213635 cri.go:89] found id: ""
	I0414 17:49:26.657624  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.657635  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:26.657642  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:26.657699  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:26.690827  213635 cri.go:89] found id: ""
	I0414 17:49:26.690854  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.690862  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:26.690867  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:26.690916  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:26.732830  213635 cri.go:89] found id: ""
	I0414 17:49:26.732866  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.732876  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:26.732883  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:26.732946  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:26.767719  213635 cri.go:89] found id: ""
	I0414 17:49:26.767770  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.767783  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:26.767793  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:26.767861  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:26.805504  213635 cri.go:89] found id: ""
	I0414 17:49:26.805531  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.805540  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:26.805547  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:26.805607  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:26.848736  213635 cri.go:89] found id: ""
	I0414 17:49:26.848761  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.848769  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:26.848774  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:26.848831  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:26.888964  213635 cri.go:89] found id: ""
	I0414 17:49:26.888996  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.889006  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:26.889017  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:26.889030  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:26.902789  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:26.902819  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:26.984479  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
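
Every "failed describe nodes" block in this log has the same shape: the embedded kubectl (here the v1.20.0 binary) is pointed at the kubeconfig on the node, and with no apiserver listening on localhost:8443 the call exits with status 1 and a connection-refused stderr. A minimal Go sketch of running that command and capturing both streams (paths copied from the log; the wrapper itself is illustrative, not minikube's actual ssh_runner):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("/bin/bash", "-c",
			"sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
		var stdout, stderr bytes.Buffer
		cmd.Stdout, cmd.Stderr = &stdout, &stderr
		if err := cmd.Run(); err != nil {
			// With nothing serving localhost:8443, this fails exactly as logged above.
			fmt.Printf("failed describe nodes: %v\nstderr: %s", err, stderr.String())
			return
		}
		fmt.Print(stdout.String())
	}
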
	I0414 17:49:26.984503  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:26.984516  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:27.072453  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:27.072491  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:27.114247  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:27.114282  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
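
The repeated "listing CRI containers" rounds above probe each control-plane component by name with "crictl ps -a --quiet", and empty output is what produces the paired 'found id: ""' / "0 containers" lines. A minimal sketch of that probe pattern, assuming only that crictl is on PATH; the helper is illustrative, not minikube's actual cri.go:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns the IDs of all containers, running or exited,
	// whose name matches the filter; empty output means no match.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainerIDs(name)
			if err != nil {
				fmt.Printf("probe %s: %v\n", name, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
			}
		}
	}
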
	I0414 17:49:25.282623  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:27.781278  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:28.019381  212269 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0414 17:49:28.019465  212269 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0414 17:49:28.019483  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHHostname
	I0414 17:49:28.019407  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined IP address 192.168.39.89 and MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.019634  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHPort
	I0414 17:49:28.019797  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHKeyPath
	I0414 17:49:28.019918  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHUsername
	I0414 17:49:28.020024  212269 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806/id_rsa Username:docker}
	I0414 17:49:28.020513  212269 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:49:28.020530  212269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 17:49:28.020546  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHHostname
	I0414 17:49:28.024119  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.024370  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.024926  212269 main.go:141] libmachine: (no-preload-721806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:f0:13", ip: ""} in network mk-no-preload-721806: {Iface:virbr1 ExpiryTime:2025-04-14 18:43:22 +0000 UTC Type:0 Mac:52:54:00:96:f0:13 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:no-preload-721806 Clientid:01:52:54:00:96:f0:13}
	I0414 17:49:28.024940  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHPort
	I0414 17:49:28.024945  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined IP address 192.168.39.89 and MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.025142  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHKeyPath
	I0414 17:49:28.025307  212269 main.go:141] libmachine: (no-preload-721806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:f0:13", ip: ""} in network mk-no-preload-721806: {Iface:virbr1 ExpiryTime:2025-04-14 18:43:22 +0000 UTC Type:0 Mac:52:54:00:96:f0:13 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:no-preload-721806 Clientid:01:52:54:00:96:f0:13}
	I0414 17:49:28.025318  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined IP address 192.168.39.89 and MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.025337  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHUsername
	I0414 17:49:28.025447  212269 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806/id_rsa Username:docker}
	I0414 17:49:28.025773  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHPort
	I0414 17:49:28.025953  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHKeyPath
	I0414 17:49:28.026140  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHUsername
	I0414 17:49:28.026298  212269 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806/id_rsa Username:docker}
	I0414 17:49:28.028168  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33317
	I0414 17:49:28.028575  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:28.028954  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:28.028977  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:28.029414  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:28.029592  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetState
	I0414 17:49:28.031192  212269 main.go:141] libmachine: (no-preload-721806) Calling .DriverName
	I0414 17:49:28.031456  212269 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 17:49:28.031470  212269 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 17:49:28.031486  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHHostname
	I0414 17:49:28.034539  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.034997  212269 main.go:141] libmachine: (no-preload-721806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:f0:13", ip: ""} in network mk-no-preload-721806: {Iface:virbr1 ExpiryTime:2025-04-14 18:43:22 +0000 UTC Type:0 Mac:52:54:00:96:f0:13 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:no-preload-721806 Clientid:01:52:54:00:96:f0:13}
	I0414 17:49:28.035014  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined IP address 192.168.39.89 and MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.035149  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHPort
	I0414 17:49:28.035305  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHKeyPath
	I0414 17:49:28.035463  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHUsername
	I0414 17:49:28.035588  212269 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806/id_rsa Username:docker}
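
Each sshutil.go line above constructs an SSH client from the machine's IP, port 22, user, and per-profile private key. A compact sketch of the same connection using golang.org/x/crypto/ssh, with values copied from the log; disabling host-key checking is an illustration-only shortcut:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
		}
		client, err := ssh.Dial("tcp", "192.168.39.89:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, _ := sess.CombinedOutput("sudo systemctl is-active kubelet")
		fmt.Printf("kubelet: %s", out)
	}
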
	I0414 17:49:28.215025  212269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:49:28.277431  212269 node_ready.go:35] waiting up to 6m0s for node "no-preload-721806" to be "Ready" ...
	I0414 17:49:28.311336  212269 node_ready.go:49] node "no-preload-721806" has status "Ready":"True"
	I0414 17:49:28.311360  212269 node_ready.go:38] duration metric: took 33.901113ms for node "no-preload-721806" to be "Ready" ...
	I0414 17:49:28.311374  212269 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:49:28.317467  212269 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-6cjwn" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:28.374855  212269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 17:49:28.390490  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0414 17:49:28.390513  212269 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0414 17:49:28.406595  212269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:49:28.437361  212269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0414 17:49:28.437392  212269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0414 17:49:28.469744  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0414 17:49:28.469782  212269 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0414 17:49:28.521154  212269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0414 17:49:28.521179  212269 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0414 17:49:28.548853  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0414 17:49:28.548878  212269 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0414 17:49:28.614511  212269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 17:49:28.614541  212269 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0414 17:49:28.649638  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0414 17:49:28.649661  212269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0414 17:49:28.703339  212269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 17:49:28.777954  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0414 17:49:28.777987  212269 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0414 17:49:28.845025  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:28.845054  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:28.845362  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:28.845380  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:28.845392  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:28.845399  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:28.845652  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:28.845672  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:28.858160  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:28.858179  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:28.858491  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:28.858514  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:28.858515  212269 main.go:141] libmachine: (no-preload-721806) DBG | Closing plugin on server side
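
The libmachine chatter here ("Plugin server listening at address 127.0.0.1:33317", then the paired "Making call to close driver server" / "Calling .Close" / "Closing plugin on server side" lines) reflects its driver-as-plugin design: each machine driver runs as a separate process serving RPC on a loopback port, and every driver call is a round trip to that process. A toy net/rpc sketch of that shape only; the service and method names are hypothetical, not libmachine's actual wire protocol:

	package main

	import (
		"fmt"
		"net/rpc"
	)

	func main() {
		// Dial the plugin's loopback listener (address taken from the log above).
		client, err := rpc.Dial("tcp", "127.0.0.1:33317")
		if err != nil {
			fmt.Println("dial plugin:", err)
			return
		}
		var state string
		// Hypothetical method name, for shape only.
		if err := client.Call("Driver.GetState", struct{}{}, &state); err != nil {
			fmt.Println("call:", err)
		}
		client.Close() // mirrors "Making call to close connection to plugin binary"
	}
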
	I0414 17:49:28.893505  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0414 17:49:28.893539  212269 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0414 17:49:28.960993  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0414 17:49:28.961020  212269 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0414 17:49:29.067780  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0414 17:49:29.067815  212269 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0414 17:49:29.129670  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0414 17:49:29.129698  212269 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0414 17:49:29.201772  212269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
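
The addon flow above is two steps per addon: scp each manifest into /etc/kubernetes/addons on the node, then apply the whole set in one kubectl invocation with an explicit kubeconfig. A sketch of assembling that command string (binary path and directory layout copied from the log; the builder itself is illustrative):

	package main

	import (
		"fmt"
		"strings"
	)

	// applyCommand builds the style of command seen in the log: one kubectl apply
	// with the node-local kubeconfig and a -f flag per staged manifest.
	func applyCommand(kubectl string, manifests []string) string {
		args := []string{"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
		for _, m := range manifests {
			args = append(args, "-f", "/etc/kubernetes/addons/"+m)
		}
		return strings.Join(args, " ")
	}

	func main() {
		fmt.Println(applyCommand("/var/lib/minikube/binaries/v1.32.2/kubectl",
			[]string{"dashboard-ns.yaml", "dashboard-svc.yaml"}))
	}
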
	I0414 17:49:29.598669  212269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.192034026s)
	I0414 17:49:29.598739  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:29.598752  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:29.599101  212269 main.go:141] libmachine: (no-preload-721806) DBG | Closing plugin on server side
	I0414 17:49:29.599101  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:29.599154  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:29.599177  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:29.599191  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:29.599468  212269 main.go:141] libmachine: (no-preload-721806) DBG | Closing plugin on server side
	I0414 17:49:29.599477  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:29.599505  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:30.044475  212269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.341048776s)
	I0414 17:49:30.044551  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:30.044569  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:30.044858  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:30.044874  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:30.044884  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:30.044891  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:30.045277  212269 main.go:141] libmachine: (no-preload-721806) DBG | Closing plugin on server side
	I0414 17:49:30.045289  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:30.045341  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:30.045355  212269 addons.go:479] Verifying addon metrics-server=true in "no-preload-721806"
	I0414 17:49:30.329870  212269 pod_ready.go:103] pod "coredns-668d6bf9bc-6cjwn" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:31.062251  212269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.860435662s)
	I0414 17:49:31.062298  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:31.062312  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:31.062629  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:31.062652  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:31.062662  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:31.062670  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:31.062906  212269 main.go:141] libmachine: (no-preload-721806) DBG | Closing plugin on server side
	I0414 17:49:31.062951  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:31.062964  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:31.064362  212269 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-721806 addons enable metrics-server
	
	I0414 17:49:31.065558  212269 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0414 17:49:29.668064  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:29.685205  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:29.685289  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:29.729725  213635 cri.go:89] found id: ""
	I0414 17:49:29.729753  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.729760  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:29.729766  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:29.729823  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:29.788536  213635 cri.go:89] found id: ""
	I0414 17:49:29.788569  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.788581  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:29.788588  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:29.788656  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:29.832032  213635 cri.go:89] found id: ""
	I0414 17:49:29.832060  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.832069  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:29.832074  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:29.832123  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:29.864981  213635 cri.go:89] found id: ""
	I0414 17:49:29.865009  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.865019  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:29.865025  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:29.865091  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:29.901024  213635 cri.go:89] found id: ""
	I0414 17:49:29.901060  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.901071  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:29.901079  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:29.901149  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:29.938790  213635 cri.go:89] found id: ""
	I0414 17:49:29.938820  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.938832  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:29.938840  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:29.938912  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:29.981414  213635 cri.go:89] found id: ""
	I0414 17:49:29.981445  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.981456  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:29.981463  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:29.981526  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:30.022510  213635 cri.go:89] found id: ""
	I0414 17:49:30.022545  213635 logs.go:282] 0 containers: []
	W0414 17:49:30.022558  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:30.022571  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:30.022588  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:30.077221  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:30.077255  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:30.091513  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:30.091552  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:30.164964  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:30.164991  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:30.165004  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:30.246281  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:30.246321  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:32.807018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:32.825456  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:32.825531  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:32.864079  213635 cri.go:89] found id: ""
	I0414 17:49:32.864116  213635 logs.go:282] 0 containers: []
	W0414 17:49:32.864126  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:32.864133  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:32.864191  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:32.905763  213635 cri.go:89] found id: ""
	I0414 17:49:32.905792  213635 logs.go:282] 0 containers: []
	W0414 17:49:32.905806  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:32.905813  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:32.905894  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:32.944126  213635 cri.go:89] found id: ""
	I0414 17:49:32.944167  213635 logs.go:282] 0 containers: []
	W0414 17:49:32.944186  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:32.944195  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:32.944258  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:32.983511  213635 cri.go:89] found id: ""
	I0414 17:49:32.983549  213635 logs.go:282] 0 containers: []
	W0414 17:49:32.983562  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:32.983571  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:32.983629  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:33.021383  213635 cri.go:89] found id: ""
	I0414 17:49:33.021411  213635 logs.go:282] 0 containers: []
	W0414 17:49:33.021422  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:33.021429  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:33.021488  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:33.058181  213635 cri.go:89] found id: ""
	I0414 17:49:33.058214  213635 logs.go:282] 0 containers: []
	W0414 17:49:33.058225  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:33.058233  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:33.058296  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:33.094426  213635 cri.go:89] found id: ""
	I0414 17:49:33.094459  213635 logs.go:282] 0 containers: []
	W0414 17:49:33.094470  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:33.094479  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:33.094537  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:33.139392  213635 cri.go:89] found id: ""
	I0414 17:49:33.139430  213635 logs.go:282] 0 containers: []
	W0414 17:49:33.139443  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:33.139455  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:33.139471  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:33.218814  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:33.218842  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:33.218860  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:29.783892  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:32.282499  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:31.066728  212269 addons.go:514] duration metric: took 3.098264633s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0414 17:49:32.824809  212269 pod_ready.go:103] pod "coredns-668d6bf9bc-6cjwn" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:35.323008  212269 pod_ready.go:103] pod "coredns-668d6bf9bc-6cjwn" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:33.325637  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:33.325678  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:33.363443  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:33.363473  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:33.427131  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:33.427167  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:35.942712  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:35.957936  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:35.958027  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:35.998316  213635 cri.go:89] found id: ""
	I0414 17:49:35.998343  213635 logs.go:282] 0 containers: []
	W0414 17:49:35.998354  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:35.998361  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:35.998419  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:36.032107  213635 cri.go:89] found id: ""
	I0414 17:49:36.032139  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.032149  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:36.032156  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:36.032211  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:36.070010  213635 cri.go:89] found id: ""
	I0414 17:49:36.070035  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.070043  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:36.070049  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:36.070104  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:36.105914  213635 cri.go:89] found id: ""
	I0414 17:49:36.105944  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.105962  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:36.105970  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:36.106036  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:36.140378  213635 cri.go:89] found id: ""
	I0414 17:49:36.140406  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.140418  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:36.140425  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:36.140487  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:36.178535  213635 cri.go:89] found id: ""
	I0414 17:49:36.178564  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.178575  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:36.178583  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:36.178652  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:36.217284  213635 cri.go:89] found id: ""
	I0414 17:49:36.217314  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.217324  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:36.217330  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:36.217391  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:36.251770  213635 cri.go:89] found id: ""
	I0414 17:49:36.251805  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.251818  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:36.251835  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:36.251850  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:36.322858  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:36.322906  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:36.337902  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:36.337939  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:36.415729  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:36.415752  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:36.415767  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:36.512960  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:36.513000  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:36.827356  212269 pod_ready.go:93] pod "coredns-668d6bf9bc-6cjwn" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:36.827377  212269 pod_ready.go:82] duration metric: took 8.509888872s for pod "coredns-668d6bf9bc-6cjwn" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.827386  212269 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-bng87" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.869474  212269 pod_ready.go:93] pod "coredns-668d6bf9bc-bng87" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:36.869506  212269 pod_ready.go:82] duration metric: took 42.1117ms for pod "coredns-668d6bf9bc-bng87" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.869522  212269 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.896002  212269 pod_ready.go:93] pod "etcd-no-preload-721806" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:36.896034  212269 pod_ready.go:82] duration metric: took 26.503053ms for pod "etcd-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.896046  212269 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.910284  212269 pod_ready.go:93] pod "kube-apiserver-no-preload-721806" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:36.910332  212269 pod_ready.go:82] duration metric: took 14.277535ms for pod "kube-apiserver-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.910360  212269 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.917658  212269 pod_ready.go:93] pod "kube-controller-manager-no-preload-721806" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:36.917678  212269 pod_ready.go:82] duration metric: took 7.305319ms for pod "kube-controller-manager-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.917689  212269 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tktgt" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:37.227025  212269 pod_ready.go:93] pod "kube-proxy-tktgt" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:37.227047  212269 pod_ready.go:82] duration metric: took 309.350302ms for pod "kube-proxy-tktgt" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:37.227056  212269 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:37.621871  212269 pod_ready.go:93] pod "kube-scheduler-no-preload-721806" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:37.621901  212269 pod_ready.go:82] duration metric: took 394.836681ms for pod "kube-scheduler-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:37.621909  212269 pod_ready.go:39] duration metric: took 9.310525251s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
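
The pod_ready.go lines above poll each system-critical pod until its Ready condition turns True, or a 6m0s budget expires. A client-go sketch of that wait; the polling helper is standard apimachinery, while the surrounding wiring and the example pod name are taken from the log for illustration:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-668d6bf9bc-6cjwn", metav1.GetOptions{})
				if err != nil {
					return false, nil // keep polling through transient errors
				}
				fmt.Printf("pod %q Ready=%v\n", pod.Name, podReady(pod))
				return podReady(pod), nil
			})
		fmt.Println("wait result:", err)
	}
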
	I0414 17:49:37.621924  212269 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:49:37.621974  212269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:37.660143  212269 api_server.go:72] duration metric: took 9.691771257s to wait for apiserver process to appear ...
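
Before probing /healthz, api_server.go first confirms a kube-apiserver process exists at all, using the pgrep invocation logged above (-f matches the full command line, -x exactly, -n picks the newest). A one-function sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// apiServerRunning mirrors the logged check; pgrep exits non-zero when
	// no process matches the pattern.
	func apiServerRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		fmt.Println("kube-apiserver present:", apiServerRunning())
	}
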
	I0414 17:49:37.660171  212269 api_server.go:88] waiting for apiserver healthz status ...
	I0414 17:49:37.660193  212269 api_server.go:253] Checking apiserver healthz at https://192.168.39.89:8443/healthz ...
	I0414 17:49:37.665313  212269 api_server.go:279] https://192.168.39.89:8443/healthz returned 200:
	ok
	I0414 17:49:37.666371  212269 api_server.go:141] control plane version: v1.32.2
	I0414 17:49:37.666390  212269 api_server.go:131] duration metric: took 6.212109ms to wait for apiserver health ...
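
The healthz probe itself is a plain HTTPS GET against the apiserver's /healthz endpoint, treating a 200 response with body "ok" as healthy. A minimal sketch; skipping TLS verification here is an illustration-only shortcut, since the real client trusts the cluster CA instead:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustration only: never skip verification outside a throwaway test VM.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.89:8443/healthz")
		if err != nil {
			fmt.Println("healthz:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	}
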
	I0414 17:49:37.666397  212269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 17:49:37.823477  212269 system_pods.go:59] 9 kube-system pods found
	I0414 17:49:37.823504  212269 system_pods.go:61] "coredns-668d6bf9bc-6cjwn" [3fb5680f-8bc6-4d35-abbf-19108c2242d3] Running
	I0414 17:49:37.823509  212269 system_pods.go:61] "coredns-668d6bf9bc-bng87" [0ae7cd1a-9760-43aa-b0ac-9f66c7e505d2] Running
	I0414 17:49:37.823513  212269 system_pods.go:61] "etcd-no-preload-721806" [6f30ffea-8f3a-4e21-9fd6-c9366bb997e2] Running
	I0414 17:49:37.823516  212269 system_pods.go:61] "kube-apiserver-no-preload-721806" [bc7d4172-ee21-4d53-a4a6-9bb7272d8b24] Running
	I0414 17:49:37.823521  212269 system_pods.go:61] "kube-controller-manager-no-preload-721806" [346266a0-a376-466c-9ebb-46772557740b] Running
	I0414 17:49:37.823525  212269 system_pods.go:61] "kube-proxy-tktgt" [984a1b9b-3c51-45d0-86bd-3ca64d1b3af8] Running
	I0414 17:49:37.823529  212269 system_pods.go:61] "kube-scheduler-no-preload-721806" [2294ad27-ffc4-4181-9bef-f865956252ac] Running
	I0414 17:49:37.823537  212269 system_pods.go:61] "metrics-server-f79f97bbb-f99gx" [c2d0b638-6f0e-41d7-b4e3-4e0f5a619c86] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:49:37.823547  212269 system_pods.go:61] "storage-provisioner" [463e19f1-b7aa-46ff-b5c7-99e1207bff9e] Running
	I0414 17:49:37.823561  212269 system_pods.go:74] duration metric: took 157.157807ms to wait for pod list to return data ...
	I0414 17:49:37.823571  212269 default_sa.go:34] waiting for default service account to be created ...
	I0414 17:49:38.021598  212269 default_sa.go:45] found service account: "default"
	I0414 17:49:38.021626  212269 default_sa.go:55] duration metric: took 198.045961ms for default service account to be created ...
	I0414 17:49:38.021642  212269 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 17:49:38.222171  212269 system_pods.go:86] 9 kube-system pods found
	I0414 17:49:38.222205  212269 system_pods.go:89] "coredns-668d6bf9bc-6cjwn" [3fb5680f-8bc6-4d35-abbf-19108c2242d3] Running
	I0414 17:49:38.222210  212269 system_pods.go:89] "coredns-668d6bf9bc-bng87" [0ae7cd1a-9760-43aa-b0ac-9f66c7e505d2] Running
	I0414 17:49:38.222214  212269 system_pods.go:89] "etcd-no-preload-721806" [6f30ffea-8f3a-4e21-9fd6-c9366bb997e2] Running
	I0414 17:49:38.222217  212269 system_pods.go:89] "kube-apiserver-no-preload-721806" [bc7d4172-ee21-4d53-a4a6-9bb7272d8b24] Running
	I0414 17:49:38.222220  212269 system_pods.go:89] "kube-controller-manager-no-preload-721806" [346266a0-a376-466c-9ebb-46772557740b] Running
	I0414 17:49:38.222224  212269 system_pods.go:89] "kube-proxy-tktgt" [984a1b9b-3c51-45d0-86bd-3ca64d1b3af8] Running
	I0414 17:49:38.222228  212269 system_pods.go:89] "kube-scheduler-no-preload-721806" [2294ad27-ffc4-4181-9bef-f865956252ac] Running
	I0414 17:49:38.222233  212269 system_pods.go:89] "metrics-server-f79f97bbb-f99gx" [c2d0b638-6f0e-41d7-b4e3-4e0f5a619c86] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:49:38.222237  212269 system_pods.go:89] "storage-provisioner" [463e19f1-b7aa-46ff-b5c7-99e1207bff9e] Running
	I0414 17:49:38.222247  212269 system_pods.go:126] duration metric: took 200.597392ms to wait for k8s-apps to be running ...
	I0414 17:49:38.222257  212269 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 17:49:38.222316  212269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:49:38.258014  212269 system_svc.go:56] duration metric: took 35.747059ms WaitForService to wait for kubelet
	I0414 17:49:38.258046  212269 kubeadm.go:582] duration metric: took 10.289680192s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 17:49:38.258069  212269 node_conditions.go:102] verifying NodePressure condition ...
	I0414 17:49:38.422770  212269 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 17:49:38.422805  212269 node_conditions.go:123] node cpu capacity is 2
	I0414 17:49:38.422833  212269 node_conditions.go:105] duration metric: took 164.757743ms to run NodePressure ...
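
The node_conditions.go lines read each node's reported capacity (here 2 CPUs and 17734596Ki of ephemeral storage) to verify there is no resource pressure. A client-go sketch of fetching those fields; only the kubeconfig path is taken from the log, the rest is illustrative:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}
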
	I0414 17:49:38.422848  212269 start.go:241] waiting for startup goroutines ...
	I0414 17:49:38.422858  212269 start.go:246] waiting for cluster config update ...
	I0414 17:49:38.422873  212269 start.go:255] writing updated cluster config ...
	I0414 17:49:38.423253  212269 ssh_runner.go:195] Run: rm -f paused
	I0414 17:49:38.493521  212269 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 17:49:38.495382  212269 out.go:177] * Done! kubectl is now configured to use "no-preload-721806" cluster and "default" namespace by default
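
The closing start.go line compares the host kubectl version (1.32.3) against the cluster version (1.32.2) and reports the minor-version skew; kubectl's support policy allows a skew of at most one minor version. A simplified sketch of that comparison, with naive major.minor.patch parsing for illustration:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns the absolute difference between the minor components
	// of two "major.minor.patch" version strings.
	func minorSkew(client, cluster string) int {
		minor := func(v string) int {
			m, _ := strconv.Atoi(strings.Split(v, ".")[1])
			return m
		}
		if d := minor(client) - minor(cluster); d >= 0 {
			return d
		}
		return minor(cluster) - minor(client)
	}

	func main() {
		fmt.Println("minor skew:", minorSkew("1.32.3", "1.32.2")) // 0, as in the log
	}
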
	I0414 17:49:34.781757  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:36.781990  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:39.053905  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:39.068768  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:39.068841  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:39.104418  213635 cri.go:89] found id: ""
	I0414 17:49:39.104446  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.104454  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:39.104460  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:39.104520  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:39.144556  213635 cri.go:89] found id: ""
	I0414 17:49:39.144587  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.144598  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:39.144605  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:39.144673  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:39.184890  213635 cri.go:89] found id: ""
	I0414 17:49:39.184923  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.184936  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:39.184946  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:39.185018  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:39.224321  213635 cri.go:89] found id: ""
	I0414 17:49:39.224353  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.224364  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:39.224372  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:39.224431  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:39.275363  213635 cri.go:89] found id: ""
	I0414 17:49:39.275393  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.275403  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:39.275411  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:39.275469  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:39.324682  213635 cri.go:89] found id: ""
	I0414 17:49:39.324715  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.324725  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:39.324733  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:39.324788  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:39.356862  213635 cri.go:89] found id: ""
	I0414 17:49:39.356891  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.356901  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:39.356908  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:39.356970  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:39.392157  213635 cri.go:89] found id: ""
	I0414 17:49:39.392186  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.392197  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:39.392208  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:39.392223  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:39.484945  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:39.484971  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:39.484989  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:39.564891  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:39.564927  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:39.608513  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:39.608543  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:39.672726  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:39.672760  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
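
Each "Gathering logs for ..." round in this stretch runs the same fixed set of shell commands over SSH: journalctl for kubelet and CRI-O, a severity-filtered dmesg, kubectl describe nodes, and a crictl/docker container listing. A sketch of that dispatch table with the command strings copied verbatim from the log (the runner loop itself is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		gatherers := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"CRI-O":            "sudo journalctl -u crio -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for name, cmd := range gatherers {
			fmt.Printf("Gathering logs for %s ...\n", name)
			if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
				fmt.Printf("  failed: %v\n%s", err, out)
			}
		}
	}
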
	I0414 17:49:42.189948  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:42.203489  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:42.203560  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:42.243021  213635 cri.go:89] found id: ""
	I0414 17:49:42.243047  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.243057  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:42.243064  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:42.243152  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:42.285782  213635 cri.go:89] found id: ""
	I0414 17:49:42.285807  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.285817  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:42.285824  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:42.285898  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:42.318326  213635 cri.go:89] found id: ""
	I0414 17:49:42.318350  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.318360  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:42.318367  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:42.318421  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:42.351765  213635 cri.go:89] found id: ""
	I0414 17:49:42.351788  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.351795  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:42.351802  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:42.351862  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:42.382539  213635 cri.go:89] found id: ""
	I0414 17:49:42.382564  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.382574  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:42.382582  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:42.382639  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:42.416009  213635 cri.go:89] found id: ""
	I0414 17:49:42.416034  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.416044  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:42.416051  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:42.416107  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:42.447820  213635 cri.go:89] found id: ""
	I0414 17:49:42.447860  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.447871  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:42.447879  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:42.447941  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:42.486157  213635 cri.go:89] found id: ""
	I0414 17:49:42.486179  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.486186  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:42.486195  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:42.486210  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:42.556937  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:42.556963  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:42.556980  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:42.636537  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:42.636569  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:42.676688  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:42.676717  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:42.728391  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:42.728421  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:38.783981  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:41.281841  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:43.282020  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:45.242452  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:45.256486  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:45.256558  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:45.291454  213635 cri.go:89] found id: ""
	I0414 17:49:45.291482  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.291490  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:45.291497  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:45.291552  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:45.328550  213635 cri.go:89] found id: ""
	I0414 17:49:45.328573  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.328583  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:45.328591  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:45.328638  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:45.365121  213635 cri.go:89] found id: ""
	I0414 17:49:45.365148  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.365155  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:45.365161  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:45.365216  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:45.402479  213635 cri.go:89] found id: ""
	I0414 17:49:45.402508  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.402519  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:45.402527  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:45.402580  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:45.433123  213635 cri.go:89] found id: ""
	I0414 17:49:45.433147  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.433155  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:45.433160  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:45.433206  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:45.466351  213635 cri.go:89] found id: ""
	I0414 17:49:45.466376  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.466383  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:45.466390  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:45.466442  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:45.498745  213635 cri.go:89] found id: ""
	I0414 17:49:45.498774  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.498785  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:45.498792  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:45.498866  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:45.531870  213635 cri.go:89] found id: ""
	I0414 17:49:45.531898  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.531908  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:45.531919  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:45.531937  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:45.582230  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:45.582257  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:45.597164  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:45.597197  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:45.666569  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:45.666598  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:45.666616  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:45.746036  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:45.746068  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
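The container-inventory pass that precedes each gather (a pgrep for the apiserver, then one crictl query per expected component) condenses to a few lines of bash; a sketch mirroring the checks in the log, with the component list copied from it:

	# report control-plane components with no CRI container, as the loop above does
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "No container was found matching \"$name\""
	done
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'kube-apiserver process not running'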
	I0414 17:49:45.782620  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:48.280928  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:48.284590  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:48.297947  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:48.298019  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:48.331443  213635 cri.go:89] found id: ""
	I0414 17:49:48.331469  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.331480  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:48.331487  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:48.331534  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:48.364569  213635 cri.go:89] found id: ""
	I0414 17:49:48.364602  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.364613  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:48.364620  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:48.364683  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:48.398063  213635 cri.go:89] found id: ""
	I0414 17:49:48.398097  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.398109  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:48.398118  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:48.398182  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:48.430783  213635 cri.go:89] found id: ""
	I0414 17:49:48.430808  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.430829  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:48.430837  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:48.430924  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:48.466378  213635 cri.go:89] found id: ""
	I0414 17:49:48.466410  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.466423  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:48.466432  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:48.466656  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:48.499766  213635 cri.go:89] found id: ""
	I0414 17:49:48.499819  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.499829  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:48.499837  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:48.499901  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:48.533192  213635 cri.go:89] found id: ""
	I0414 17:49:48.533218  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.533228  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:48.533235  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:48.533294  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:48.565138  213635 cri.go:89] found id: ""
	I0414 17:49:48.565159  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.565167  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:48.565174  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:48.565183  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:48.616578  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:48.616609  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:48.630209  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:48.630232  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:48.697158  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:48.697184  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:48.697196  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:48.777141  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:48.777177  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:51.322807  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:51.336971  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:51.337037  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:51.373592  213635 cri.go:89] found id: ""
	I0414 17:49:51.373616  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.373623  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:51.373628  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:51.373675  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:51.410753  213635 cri.go:89] found id: ""
	I0414 17:49:51.410782  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.410791  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:51.410796  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:51.410846  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:51.443612  213635 cri.go:89] found id: ""
	I0414 17:49:51.443639  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.443650  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:51.443656  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:51.443717  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:51.476956  213635 cri.go:89] found id: ""
	I0414 17:49:51.476982  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.476990  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:51.476995  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:51.477041  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:51.512295  213635 cri.go:89] found id: ""
	I0414 17:49:51.512330  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.512349  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:51.512357  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:51.512420  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:51.553410  213635 cri.go:89] found id: ""
	I0414 17:49:51.553437  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.553445  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:51.553451  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:51.553514  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:51.593165  213635 cri.go:89] found id: ""
	I0414 17:49:51.593196  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.593205  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:51.593210  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:51.593259  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:51.634382  213635 cri.go:89] found id: ""
	I0414 17:49:51.634425  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.634436  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:51.634446  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:51.634457  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:51.687688  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:51.687725  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:51.703569  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:51.703600  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:51.775371  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:51.775398  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:51.775414  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:51.851890  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:51.851936  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:50.282042  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:52.782200  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:54.389539  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:54.403233  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:54.403293  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:54.447655  213635 cri.go:89] found id: ""
	I0414 17:49:54.447675  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.447683  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:54.447690  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:54.447736  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:54.486882  213635 cri.go:89] found id: ""
	I0414 17:49:54.486905  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.486912  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:54.486917  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:54.486977  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:54.519544  213635 cri.go:89] found id: ""
	I0414 17:49:54.519570  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.519581  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:54.519588  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:54.519643  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:54.558646  213635 cri.go:89] found id: ""
	I0414 17:49:54.558671  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.558681  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:54.558689  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:54.558735  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:54.600650  213635 cri.go:89] found id: ""
	I0414 17:49:54.600674  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.600680  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:54.600685  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:54.600732  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:54.641206  213635 cri.go:89] found id: ""
	I0414 17:49:54.641231  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.641240  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:54.641247  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:54.641302  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:54.680671  213635 cri.go:89] found id: ""
	I0414 17:49:54.680698  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.680708  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:54.680715  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:54.680765  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:54.721028  213635 cri.go:89] found id: ""
	I0414 17:49:54.721050  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.721056  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:54.721066  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:54.721076  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:54.769755  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:54.769782  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:54.785252  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:54.785273  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:54.855288  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:54.855308  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:54.855322  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:54.952695  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:54.952735  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:57.499933  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:57.514593  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:57.514658  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:57.549526  213635 cri.go:89] found id: ""
	I0414 17:49:57.549550  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.549558  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:57.549564  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:57.549610  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:57.582596  213635 cri.go:89] found id: ""
	I0414 17:49:57.582626  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.582637  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:57.582643  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:57.582695  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:57.622214  213635 cri.go:89] found id: ""
	I0414 17:49:57.622244  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.622252  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:57.622257  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:57.622313  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:57.655388  213635 cri.go:89] found id: ""
	I0414 17:49:57.655415  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.655422  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:57.655428  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:57.655474  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:57.692324  213635 cri.go:89] found id: ""
	I0414 17:49:57.692349  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.692357  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:57.692362  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:57.692407  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:57.725614  213635 cri.go:89] found id: ""
	I0414 17:49:57.725637  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.725644  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:57.725650  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:57.725700  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:57.757747  213635 cri.go:89] found id: ""
	I0414 17:49:57.757779  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.757788  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:57.757794  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:57.757868  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:57.791614  213635 cri.go:89] found id: ""
	I0414 17:49:57.791651  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.791658  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:57.791666  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:57.791676  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:57.839950  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:57.839983  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:57.852850  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:57.852877  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:57.925310  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:57.925338  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:57.925355  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:58.008445  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:58.008484  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:54.783081  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:57.282711  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:00.550402  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:50:00.564239  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:50:00.564296  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:50:00.598410  213635 cri.go:89] found id: ""
	I0414 17:50:00.598439  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.598447  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:50:00.598452  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:50:00.598500  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:50:00.629470  213635 cri.go:89] found id: ""
	I0414 17:50:00.629489  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.629497  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:50:00.629502  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:50:00.629547  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:50:00.660663  213635 cri.go:89] found id: ""
	I0414 17:50:00.660686  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.660695  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:50:00.660703  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:50:00.660780  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:50:00.703422  213635 cri.go:89] found id: ""
	I0414 17:50:00.703450  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.703461  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:50:00.703467  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:50:00.703524  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:50:00.736355  213635 cri.go:89] found id: ""
	I0414 17:50:00.736378  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.736388  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:50:00.736394  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:50:00.736447  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:50:00.771432  213635 cri.go:89] found id: ""
	I0414 17:50:00.771460  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.771470  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:50:00.771478  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:50:00.771544  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:50:00.804453  213635 cri.go:89] found id: ""
	I0414 17:50:00.804474  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.804483  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:50:00.804490  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:50:00.804550  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:50:00.840934  213635 cri.go:89] found id: ""
	I0414 17:50:00.840962  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.840971  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:50:00.840982  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:50:00.840994  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:50:00.888813  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:50:00.888846  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:50:00.901168  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:50:00.901188  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:50:00.970608  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:50:00.970638  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:50:00.970655  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:50:01.054190  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:50:01.054225  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:59.781167  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:01.783383  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:03.592930  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:50:03.607476  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:50:03.607542  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:50:03.647536  213635 cri.go:89] found id: ""
	I0414 17:50:03.647559  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.647567  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:50:03.647572  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:50:03.647616  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:50:03.687053  213635 cri.go:89] found id: ""
	I0414 17:50:03.687078  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.687086  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:50:03.687092  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:50:03.687135  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:50:03.724232  213635 cri.go:89] found id: ""
	I0414 17:50:03.724258  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.724268  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:50:03.724276  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:50:03.724327  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:50:03.758621  213635 cri.go:89] found id: ""
	I0414 17:50:03.758650  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.758661  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:50:03.758668  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:50:03.758735  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:50:03.792524  213635 cri.go:89] found id: ""
	I0414 17:50:03.792553  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.792563  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:50:03.792570  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:50:03.792623  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:50:03.823533  213635 cri.go:89] found id: ""
	I0414 17:50:03.823562  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.823569  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:50:03.823575  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:50:03.823619  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:50:03.855038  213635 cri.go:89] found id: ""
	I0414 17:50:03.855060  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.855067  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:50:03.855072  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:50:03.855122  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:50:03.886260  213635 cri.go:89] found id: ""
	I0414 17:50:03.886288  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.886296  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:50:03.886304  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:50:03.886314  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:50:03.935750  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:50:03.935780  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:50:03.948571  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:50:03.948599  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:50:04.016600  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:50:04.016625  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:50:04.016641  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:50:04.095247  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:50:04.095278  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:50:06.633583  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:50:06.647292  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:50:06.647371  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:50:06.680994  213635 cri.go:89] found id: ""
	I0414 17:50:06.681023  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.681031  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:50:06.681036  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:50:06.681093  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:50:06.715235  213635 cri.go:89] found id: ""
	I0414 17:50:06.715262  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.715269  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:50:06.715275  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:50:06.715333  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:50:06.750320  213635 cri.go:89] found id: ""
	I0414 17:50:06.750349  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.750359  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:50:06.750367  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:50:06.750425  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:50:06.781634  213635 cri.go:89] found id: ""
	I0414 17:50:06.781657  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.781666  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:50:06.781673  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:50:06.781731  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:50:06.812684  213635 cri.go:89] found id: ""
	I0414 17:50:06.812709  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.812719  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:50:06.812727  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:50:06.812785  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:50:06.843417  213635 cri.go:89] found id: ""
	I0414 17:50:06.843447  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.843458  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:50:06.843466  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:50:06.843519  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:50:06.878915  213635 cri.go:89] found id: ""
	I0414 17:50:06.878943  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.878952  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:50:06.878958  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:50:06.879018  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:50:06.911647  213635 cri.go:89] found id: ""
	I0414 17:50:06.911670  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.911680  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:50:06.911705  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:50:06.911720  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:50:06.977253  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:50:06.977286  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:50:06.977304  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:50:07.056442  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:50:07.056475  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:50:07.104053  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:50:07.104082  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:50:07.153444  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:50:07.153483  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:50:04.281983  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:04.776666  213406 pod_ready.go:82] duration metric: took 4m0.000384507s for pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace to be "Ready" ...
	E0414 17:50:04.776701  213406 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0414 17:50:04.776719  213406 pod_ready.go:39] duration metric: took 4m12.533820908s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:50:04.776753  213406 kubeadm.go:597] duration metric: took 4m20.355244776s to restartPrimaryControlPlane
	W0414 17:50:04.776834  213406 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0414 17:50:04.776879  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
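The four-minute readiness poll that just timed out (the pod_ready checks above, repeated every couple of seconds) can be reproduced with kubectl's built-in condition wait; a sketch, assuming the kubeconfig context carries the profile name that appears later in these logs (embed-certs-418468):

	# block until the pod reports Ready, or fail after the same 4m budget
	kubectl --context embed-certs-418468 -n kube-system \
	  wait --for=condition=Ready pod/metrics-server-f79f97bbb-9vnsg --timeout=4m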
	I0414 17:50:09.667392  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:50:09.680695  213635 kubeadm.go:597] duration metric: took 4m3.288338716s to restartPrimaryControlPlane
	W0414 17:50:09.680757  213635 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0414 17:50:09.680787  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 17:50:15.123013  213635 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.442204913s)
	I0414 17:50:15.123098  213635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:50:15.137541  213635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:50:15.147676  213635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:50:15.157224  213635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:50:15.157238  213635 kubeadm.go:157] found existing configuration files:
	
	I0414 17:50:15.157273  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:50:15.166484  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:50:15.166525  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:50:15.175831  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:50:15.184692  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:50:15.184731  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:50:15.193871  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:50:15.202947  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:50:15.202993  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:50:15.212451  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:50:15.221477  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:50:15.221512  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
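The stale-config check-and-remove sequence above (one grep plus one rm -f per kubeconfig) is the same four-file pass each time; a compact sketch of it, using the endpoint string exactly as the log greps for it (a missing file also fails the grep, so it is removed harmlessly, matching the behavior recorded above):

	# drop any kubeconfig that does not reference the expected control-plane endpoint
	endpoint='https://control-plane.minikube.internal:8443'
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done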
	I0414 17:50:15.231277  213635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:50:15.294259  213635 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 17:50:15.294330  213635 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:50:15.422321  213635 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:50:15.422476  213635 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:50:15.422622  213635 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 17:50:15.596146  213635 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:50:15.598667  213635 out.go:235]   - Generating certificates and keys ...
	I0414 17:50:15.598769  213635 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:50:15.598859  213635 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:50:15.598976  213635 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 17:50:15.599034  213635 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 17:50:15.599148  213635 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 17:50:15.599238  213635 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 17:50:15.599301  213635 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 17:50:15.599353  213635 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 17:50:15.599416  213635 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 17:50:15.599514  213635 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 17:50:15.599573  213635 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 17:50:15.599654  213635 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:50:15.664653  213635 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:50:15.743669  213635 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:50:15.813965  213635 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:50:16.089174  213635 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:50:16.103702  213635 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:50:16.104792  213635 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:50:16.104884  213635 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:50:16.250169  213635 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:50:16.252518  213635 out.go:235]   - Booting up control plane ...
	I0414 17:50:16.252640  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:50:16.262331  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:50:16.263648  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:50:16.264988  213635 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:50:16.267648  213635 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
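While kubeadm sits in this wait-control-plane phase, the two health endpoints these logs reference can be probed by hand from inside the VM (the kubelet healthz on 10248, the apiserver on 8443); a sketch, where -k is used only because curl does not trust the cluster CA in this ad-hoc check:

	# kubelet health, plain HTTP on localhost
	curl -sf http://127.0.0.1:10248/healthz && echo kubelet-ok
	# apiserver health; /healthz is served to anonymous clients by default
	curl -ksf https://127.0.0.1:8443/healthz && echo apiserver-ok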
	I0414 17:50:32.538099  213406 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.761187529s)
	I0414 17:50:32.538165  213406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:50:32.553667  213406 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:50:32.563284  213406 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:50:32.572633  213406 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:50:32.572650  213406 kubeadm.go:157] found existing configuration files:
	
	I0414 17:50:32.572699  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:50:32.581936  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:50:32.581989  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:50:32.592144  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:50:32.600756  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:50:32.600806  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:50:32.610243  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:50:32.619999  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:50:32.620046  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:50:32.629791  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:50:32.639153  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:50:32.639192  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 17:50:32.648625  213406 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:50:32.799107  213406 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:50:40.718968  213406 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 17:50:40.719047  213406 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:50:40.719195  213406 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:50:40.719284  213406 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:50:40.719402  213406 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 17:50:40.719495  213406 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:50:40.720874  213406 out.go:235]   - Generating certificates and keys ...
	I0414 17:50:40.720969  213406 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:50:40.721050  213406 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:50:40.721133  213406 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 17:50:40.721193  213406 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 17:50:40.721253  213406 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 17:50:40.721300  213406 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 17:50:40.721375  213406 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 17:50:40.721457  213406 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 17:50:40.721523  213406 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 17:50:40.721588  213406 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 17:50:40.721623  213406 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 17:50:40.721690  213406 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:50:40.721773  213406 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:50:40.721867  213406 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 17:50:40.721954  213406 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:50:40.722064  213406 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:50:40.722157  213406 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:50:40.722264  213406 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:50:40.722356  213406 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:50:40.724310  213406 out.go:235]   - Booting up control plane ...
	I0414 17:50:40.724425  213406 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:50:40.724523  213406 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:50:40.724621  213406 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:50:40.724763  213406 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:50:40.724890  213406 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:50:40.724962  213406 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:50:40.725139  213406 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 17:50:40.725268  213406 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 17:50:40.725360  213406 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000971318s
	I0414 17:50:40.725463  213406 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 17:50:40.725555  213406 kubeadm.go:310] [api-check] The API server is healthy after 4.502714129s
	I0414 17:50:40.725689  213406 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 17:50:40.725884  213406 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 17:50:40.725975  213406 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 17:50:40.726178  213406 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-418468 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 17:50:40.726245  213406 kubeadm.go:310] [bootstrap-token] Using token: 2kykq2.rhxxbbskj81go9zq
	I0414 17:50:40.727271  213406 out.go:235]   - Configuring RBAC rules ...
	I0414 17:50:40.727362  213406 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 17:50:40.727452  213406 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 17:50:40.727612  213406 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 17:50:40.727733  213406 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 17:50:40.727879  213406 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 17:50:40.728009  213406 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 17:50:40.728182  213406 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 17:50:40.728252  213406 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 17:50:40.728308  213406 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 17:50:40.728315  213406 kubeadm.go:310] 
	I0414 17:50:40.728365  213406 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 17:50:40.728374  213406 kubeadm.go:310] 
	I0414 17:50:40.728444  213406 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 17:50:40.728450  213406 kubeadm.go:310] 
	I0414 17:50:40.728487  213406 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 17:50:40.728568  213406 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 17:50:40.728654  213406 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 17:50:40.728663  213406 kubeadm.go:310] 
	I0414 17:50:40.728744  213406 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 17:50:40.728753  213406 kubeadm.go:310] 
	I0414 17:50:40.728829  213406 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 17:50:40.728841  213406 kubeadm.go:310] 
	I0414 17:50:40.728888  213406 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 17:50:40.728953  213406 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 17:50:40.729011  213406 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 17:50:40.729017  213406 kubeadm.go:310] 
	I0414 17:50:40.729090  213406 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 17:50:40.729163  213406 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 17:50:40.729169  213406 kubeadm.go:310] 
	I0414 17:50:40.729277  213406 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2kykq2.rhxxbbskj81go9zq \
	I0414 17:50:40.729434  213406 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d \
	I0414 17:50:40.729480  213406 kubeadm.go:310] 	--control-plane 
	I0414 17:50:40.729489  213406 kubeadm.go:310] 
	I0414 17:50:40.729585  213406 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 17:50:40.729599  213406 kubeadm.go:310] 
	I0414 17:50:40.729712  213406 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2kykq2.rhxxbbskj81go9zq \
	I0414 17:50:40.729880  213406 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d 
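	For reference, the --discovery-token-ca-cert-hash value printed in the join commands above is, per the kubeadm documentation, the SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A minimal Go sketch of that derivation (illustrative only, not minikube code; the ca.crt path is kubeadm's default and an assumption here):

		package main

		import (
			"crypto/sha256"
			"crypto/x509"
			"encoding/pem"
			"fmt"
			"log"
			"os"
		)

		func main() {
			// kubeadm's default CA location on a control-plane node (assumed path).
			pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
			if err != nil {
				log.Fatal(err)
			}
			block, _ := pem.Decode(pemBytes)
			if block == nil {
				log.Fatal("no PEM block found in ca.crt")
			}
			cert, err := x509.ParseCertificate(block.Bytes)
			if err != nil {
				log.Fatal(err)
			}
			// The join hash is sha256 over the raw Subject Public Key Info bytes.
			sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
			fmt.Printf("sha256:%x\n", sum)
		}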
	I0414 17:50:40.729894  213406 cni.go:84] Creating CNI manager for ""
	I0414 17:50:40.729902  213406 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:50:40.731470  213406 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 17:50:40.732385  213406 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 17:50:40.744504  213406 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
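	The log records only the size (496 bytes) of the conflist written to /etc/cni/net.d/1-k8s.conflist, not its contents. For illustration, a bridge CNI configuration of the kind this step installs typically has the following shape (the field values here, including the 10.244.0.0/16 pod subnet, are conventional examples rather than the exact bytes minikube wrote):

		{
		  "cniVersion": "0.3.1",
		  "name": "bridge",
		  "plugins": [
		    {
		      "type": "bridge",
		      "bridge": "bridge",
		      "isDefaultGateway": true,
		      "ipMasq": true,
		      "hairpinMode": true,
		      "ipam": {
		        "type": "host-local",
		        "subnet": "10.244.0.0/16"
		      }
		    },
		    {
		      "type": "portmap",
		      "capabilities": { "portMappings": true }
		    }
		  ]
		}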
	I0414 17:50:40.762319  213406 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 17:50:40.762424  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:40.762443  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-418468 minikube.k8s.io/updated_at=2025_04_14T17_50_40_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f1e69a1cd498979c80dbe968253c827f6eb2cf37 minikube.k8s.io/name=embed-certs-418468 minikube.k8s.io/primary=true
	I0414 17:50:40.994576  213406 ops.go:34] apiserver oom_adj: -16
	I0414 17:50:40.994598  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:41.495583  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:41.995608  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:42.494670  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:42.995490  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:43.494862  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:43.995730  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:44.495428  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:44.592036  213406 kubeadm.go:1113] duration metric: took 3.829658673s to wait for elevateKubeSystemPrivileges
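	The eight "kubectl get sa default" runs above, spaced roughly 500ms apart, are a poll-until-ready loop: the RBAC elevation step cannot proceed until the "default" service account exists. A minimal Go sketch of that pattern (illustrative only, not minikube's implementation; waitForDefaultSA is a hypothetical helper):

		package main

		import (
			"fmt"
			"os/exec"
			"time"
		)

		// waitForDefaultSA polls until `kubectl get sa default` succeeds or the
		// timeout elapses, mirroring the retry cadence visible in the log above.
		func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
			deadline := time.Now().Add(timeout)
			for time.Now().Before(deadline) {
				cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig="+kubeconfig)
				if cmd.Run() == nil {
					return nil // service account exists; later steps may proceed
				}
				time.Sleep(500 * time.Millisecond)
			}
			return fmt.Errorf("timed out waiting for default service account")
		}

		func main() {
			if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
				fmt.Println(err)
			}
		}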
	I0414 17:50:44.592070  213406 kubeadm.go:394] duration metric: took 5m0.228669417s to StartCluster
	I0414 17:50:44.592092  213406 settings.go:142] acquiring lock: {Name:mk0f1596f566b3225bf96154f374fff0641b21e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:50:44.592185  213406 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:50:44.593289  213406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:50:44.593514  213406 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.199 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 17:50:44.593648  213406 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 17:50:44.593726  213406 config.go:182] Loaded profile config "embed-certs-418468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:50:44.593753  213406 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-418468"
	I0414 17:50:44.593775  213406 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-418468"
	W0414 17:50:44.593788  213406 addons.go:247] addon storage-provisioner should already be in state true
	I0414 17:50:44.593788  213406 addons.go:69] Setting dashboard=true in profile "embed-certs-418468"
	I0414 17:50:44.593793  213406 addons.go:69] Setting metrics-server=true in profile "embed-certs-418468"
	I0414 17:50:44.593809  213406 addons.go:238] Setting addon dashboard=true in "embed-certs-418468"
	I0414 17:50:44.593818  213406 addons.go:238] Setting addon metrics-server=true in "embed-certs-418468"
	W0414 17:50:44.593840  213406 addons.go:247] addon metrics-server should already be in state true
	I0414 17:50:44.593774  213406 addons.go:69] Setting default-storageclass=true in profile "embed-certs-418468"
	I0414 17:50:44.593872  213406 host.go:66] Checking if "embed-certs-418468" exists ...
	I0414 17:50:44.593881  213406 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-418468"
	W0414 17:50:44.593819  213406 addons.go:247] addon dashboard should already be in state true
	I0414 17:50:44.593841  213406 host.go:66] Checking if "embed-certs-418468" exists ...
	I0414 17:50:44.593949  213406 host.go:66] Checking if "embed-certs-418468" exists ...
	I0414 17:50:44.594259  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.594294  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.594307  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.594325  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.594382  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.594404  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.594442  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.594407  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.595088  213406 out.go:177] * Verifying Kubernetes components...
	I0414 17:50:44.596521  213406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:50:44.609533  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43747
	I0414 17:50:44.609575  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37055
	I0414 17:50:44.609610  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39057
	I0414 17:50:44.610072  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.610124  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.610136  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.610594  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.610614  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.610724  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.610728  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.610746  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.610783  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.610997  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.611126  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.611245  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.611287  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetState
	I0414 17:50:44.611566  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.611607  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.611855  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.611890  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.612974  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44885
	I0414 17:50:44.613483  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.614431  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.614549  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.614940  213406 addons.go:238] Setting addon default-storageclass=true in "embed-certs-418468"
	W0414 17:50:44.614962  213406 addons.go:247] addon default-storageclass should already be in state true
	I0414 17:50:44.614990  213406 host.go:66] Checking if "embed-certs-418468" exists ...
	I0414 17:50:44.614950  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.615345  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.615388  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.615539  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.615584  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.626843  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33791
	I0414 17:50:44.627427  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.627885  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.627905  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.628338  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.628542  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetState
	I0414 17:50:44.629083  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0414 17:50:44.629405  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.629932  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.629948  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.630188  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0414 17:50:44.630331  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.630425  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:50:44.630488  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.630767  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.630792  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.630993  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.631008  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.631289  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.631482  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetState
	I0414 17:50:44.632157  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44551
	I0414 17:50:44.632324  213406 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:50:44.632525  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.633136  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.633159  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.633372  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:50:44.633566  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.633657  213406 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:50:44.633675  213406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 17:50:44.633693  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:50:44.633762  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetState
	I0414 17:50:44.634840  213406 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0414 17:50:44.635923  213406 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0414 17:50:44.636145  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:50:44.636955  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0414 17:50:44.636970  213406 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0414 17:50:44.636984  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:50:44.637272  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.637551  213406 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0414 17:50:44.637668  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:50:44.637698  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.637892  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:50:44.638053  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:50:44.638220  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:50:44.638412  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:50:44.638614  213406 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0414 17:50:44.638627  213406 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0414 17:50:44.638642  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:50:44.640489  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.640921  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:50:44.640999  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.641118  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:50:44.641252  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:50:44.641353  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:50:44.641461  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:50:44.641481  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.641837  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:50:44.641860  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.642029  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:50:44.642195  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:50:44.642338  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:50:44.642468  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:50:44.649470  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I0414 17:50:44.649885  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.650319  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.650332  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.650688  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.650862  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetState
	I0414 17:50:44.652217  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:50:44.652408  213406 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 17:50:44.652422  213406 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 17:50:44.652437  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:50:44.654995  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.655423  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:50:44.655451  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.655552  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:50:44.655680  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:50:44.655776  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:50:44.655847  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:50:44.771042  213406 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:50:44.790138  213406 node_ready.go:35] waiting up to 6m0s for node "embed-certs-418468" to be "Ready" ...
	I0414 17:50:44.813392  213406 node_ready.go:49] node "embed-certs-418468" has status "Ready":"True"
	I0414 17:50:44.813417  213406 node_ready.go:38] duration metric: took 23.248396ms for node "embed-certs-418468" to be "Ready" ...
	I0414 17:50:44.813429  213406 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0414 17:50:44.816247  213406 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:44.901629  213406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 17:50:44.909788  213406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:50:44.915477  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0414 17:50:44.915498  213406 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0414 17:50:44.941111  213406 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0414 17:50:44.941132  213406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0414 17:50:44.962200  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0414 17:50:44.962221  213406 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0414 17:50:45.009756  213406 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0414 17:50:45.009781  213406 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0414 17:50:45.045994  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0414 17:50:45.046027  213406 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0414 17:50:45.110797  213406 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 17:50:45.110830  213406 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0414 17:50:45.174495  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0414 17:50:45.174532  213406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0414 17:50:45.225055  213406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 17:50:45.260868  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0414 17:50:45.260897  213406 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0414 17:50:45.286443  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:45.286475  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:45.286795  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:45.286859  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:45.286873  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:45.286882  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:45.286824  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Closing plugin on server side
	I0414 17:50:45.287121  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:45.287165  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:45.319685  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:45.319702  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:45.320094  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:45.320125  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:45.320125  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Closing plugin on server side
	I0414 17:50:45.348341  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0414 17:50:45.348362  213406 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0414 17:50:45.425795  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0414 17:50:45.425820  213406 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0414 17:50:45.460510  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0414 17:50:45.460534  213406 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0414 17:50:45.539385  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0414 17:50:45.539413  213406 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0414 17:50:45.581338  213406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0414 17:50:45.899255  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:45.899281  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:45.899682  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:45.899757  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:45.899701  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Closing plugin on server side
	I0414 17:50:45.899772  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:45.899847  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:45.900112  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:45.900124  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:46.625721  213406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.400621394s)
	I0414 17:50:46.625789  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:46.625805  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:46.626108  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:46.626152  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:46.626167  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:46.626175  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:46.626444  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Closing plugin on server side
	I0414 17:50:46.626480  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:46.626495  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:46.626506  213406 addons.go:479] Verifying addon metrics-server=true in "embed-certs-418468"
	I0414 17:50:46.825449  213406 pod_ready.go:103] pod "etcd-embed-certs-418468" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:47.825152  213406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.24373778s)
	I0414 17:50:47.825202  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:47.825214  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:47.825570  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:47.825589  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:47.825599  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:47.825606  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:47.825874  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:47.825893  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:47.827533  213406 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-418468 addons enable metrics-server
	
	I0414 17:50:47.828991  213406 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0414 17:50:47.830391  213406 addons.go:514] duration metric: took 3.236761674s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0414 17:50:49.325501  213406 pod_ready.go:103] pod "etcd-embed-certs-418468" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:51.822230  213406 pod_ready.go:103] pod "etcd-embed-certs-418468" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:53.821538  213406 pod_ready.go:93] pod "etcd-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:50:53.821565  213406 pod_ready.go:82] duration metric: took 9.005299134s for pod "etcd-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.821578  213406 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.825285  213406 pod_ready.go:93] pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:50:53.825300  213406 pod_ready.go:82] duration metric: took 3.715551ms for pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.825308  213406 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.829517  213406 pod_ready.go:93] pod "kube-controller-manager-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:50:53.829531  213406 pod_ready.go:82] duration metric: took 4.218381ms for pod "kube-controller-manager-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.829538  213406 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.835753  213406 pod_ready.go:93] pod "kube-scheduler-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:50:53.835766  213406 pod_ready.go:82] duration metric: took 6.223543ms for pod "kube-scheduler-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.835772  213406 pod_ready.go:39] duration metric: took 9.022329744s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:50:53.835786  213406 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:50:53.835832  213406 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:50:53.867607  213406 api_server.go:72] duration metric: took 9.274050694s to wait for apiserver process to appear ...
	I0414 17:50:53.867636  213406 api_server.go:88] waiting for apiserver healthz status ...
	I0414 17:50:53.867656  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:50:53.871486  213406 api_server.go:279] https://192.168.50.199:8443/healthz returned 200:
	ok
	I0414 17:50:53.872317  213406 api_server.go:141] control plane version: v1.32.2
	I0414 17:50:53.872338  213406 api_server.go:131] duration metric: took 4.691901ms to wait for apiserver health ...
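	The health probe above is a plain HTTPS GET against the apiserver's /healthz endpoint, treating a 200 response with body "ok" as healthy. A minimal Go sketch of such a probe (illustrative only; a real client would verify the cluster CA instead of skipping TLS verification as this one does):

		package main

		import (
			"crypto/tls"
			"fmt"
			"io"
			"log"
			"net/http"
			"time"
		)

		func main() {
			url := "https://192.168.50.199:8443/healthz"
			client := &http.Client{
				Timeout: 5 * time.Second,
				// Assumption for the sketch only: skip CA verification.
				Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			}
			resp, err := client.Get(url)
			if err != nil {
				log.Fatal(err)
			}
			defer resp.Body.Close()
			body, _ := io.ReadAll(resp.Body)
			fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
		}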
	I0414 17:50:53.872344  213406 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 17:50:53.878405  213406 system_pods.go:59] 9 kube-system pods found
	I0414 17:50:53.878425  213406 system_pods.go:61] "coredns-668d6bf9bc-4vrqt" [0738482b-8c0d-4c89-a82f-3dd26a143603] Running
	I0414 17:50:53.878430  213406 system_pods.go:61] "coredns-668d6bf9bc-kbbbq" [24fdcd3d-22b7-4976-85f2-42754178ac49] Running
	I0414 17:50:53.878434  213406 system_pods.go:61] "etcd-embed-certs-418468" [97963194-6254-4aaf-b879-3c4000c86351] Running
	I0414 17:50:53.878437  213406 system_pods.go:61] "kube-apiserver-embed-certs-418468" [8cdb0b46-19da-4d8e-9bd0-7efaa4ef75e6] Running
	I0414 17:50:53.878441  213406 system_pods.go:61] "kube-controller-manager-embed-certs-418468" [7d26ed2b-d015-4015-b248-ccce9e76a6bb] Running
	I0414 17:50:53.878444  213406 system_pods.go:61] "kube-proxy-zqrnn" [b0b54433-bd5d-4c9b-a547-8558e3d66058] Running
	I0414 17:50:53.878447  213406 system_pods.go:61] "kube-scheduler-embed-certs-418468" [5bd1256a-1d95-4e7d-b52e-0208820937f8] Running
	I0414 17:50:53.878454  213406 system_pods.go:61] "metrics-server-f79f97bbb-8blvp" [39557b8d-be28-48b9-ab37-76c22f46341d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:50:53.878461  213406 system_pods.go:61] "storage-provisioner" [136247c3-315f-43ad-a40d-080ad60a6b45] Running
	I0414 17:50:53.878469  213406 system_pods.go:74] duration metric: took 6.120329ms to wait for pod list to return data ...
	I0414 17:50:53.878478  213406 default_sa.go:34] waiting for default service account to be created ...
	I0414 17:50:53.880531  213406 default_sa.go:45] found service account: "default"
	I0414 17:50:53.880549  213406 default_sa.go:55] duration metric: took 2.064832ms for default service account to be created ...
	I0414 17:50:53.880558  213406 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 17:50:54.020249  213406 system_pods.go:86] 9 kube-system pods found
	I0414 17:50:54.020276  213406 system_pods.go:89] "coredns-668d6bf9bc-4vrqt" [0738482b-8c0d-4c89-a82f-3dd26a143603] Running
	I0414 17:50:54.020282  213406 system_pods.go:89] "coredns-668d6bf9bc-kbbbq" [24fdcd3d-22b7-4976-85f2-42754178ac49] Running
	I0414 17:50:54.020286  213406 system_pods.go:89] "etcd-embed-certs-418468" [97963194-6254-4aaf-b879-3c4000c86351] Running
	I0414 17:50:54.020290  213406 system_pods.go:89] "kube-apiserver-embed-certs-418468" [8cdb0b46-19da-4d8e-9bd0-7efaa4ef75e6] Running
	I0414 17:50:54.020295  213406 system_pods.go:89] "kube-controller-manager-embed-certs-418468" [7d26ed2b-d015-4015-b248-ccce9e76a6bb] Running
	I0414 17:50:54.020298  213406 system_pods.go:89] "kube-proxy-zqrnn" [b0b54433-bd5d-4c9b-a547-8558e3d66058] Running
	I0414 17:50:54.020301  213406 system_pods.go:89] "kube-scheduler-embed-certs-418468" [5bd1256a-1d95-4e7d-b52e-0208820937f8] Running
	I0414 17:50:54.020307  213406 system_pods.go:89] "metrics-server-f79f97bbb-8blvp" [39557b8d-be28-48b9-ab37-76c22f46341d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:50:54.020312  213406 system_pods.go:89] "storage-provisioner" [136247c3-315f-43ad-a40d-080ad60a6b45] Running
	I0414 17:50:54.020323  213406 system_pods.go:126] duration metric: took 139.758195ms to wait for k8s-apps to be running ...
	I0414 17:50:54.020333  213406 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 17:50:54.020383  213406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:50:54.042446  213406 system_svc.go:56] duration metric: took 22.104112ms WaitForService to wait for kubelet
	I0414 17:50:54.042479  213406 kubeadm.go:582] duration metric: took 9.448925946s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 17:50:54.042499  213406 node_conditions.go:102] verifying NodePressure condition ...
	I0414 17:50:54.219590  213406 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 17:50:54.219612  213406 node_conditions.go:123] node cpu capacity is 2
	I0414 17:50:54.219623  213406 node_conditions.go:105] duration metric: took 177.119005ms to run NodePressure ...
	I0414 17:50:54.219634  213406 start.go:241] waiting for startup goroutines ...
	I0414 17:50:54.219642  213406 start.go:246] waiting for cluster config update ...
	I0414 17:50:54.219655  213406 start.go:255] writing updated cluster config ...
	I0414 17:50:54.219959  213406 ssh_runner.go:195] Run: rm -f paused
	I0414 17:50:54.282458  213406 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 17:50:54.284727  213406 out.go:177] * Done! kubectl is now configured to use "embed-certs-418468" cluster and "default" namespace by default
	I0414 17:50:56.269443  213635 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 17:50:56.270353  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:50:56.270523  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:51:01.271007  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:51:01.271253  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:51:11.271837  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:51:11.272049  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:51:31.273087  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:51:31.273315  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:52:11.275552  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:52:11.275856  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:52:11.275878  213635 kubeadm.go:310] 
	I0414 17:52:11.275927  213635 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 17:52:11.275981  213635 kubeadm.go:310] 		timed out waiting for the condition
	I0414 17:52:11.275991  213635 kubeadm.go:310] 
	I0414 17:52:11.276038  213635 kubeadm.go:310] 	This error is likely caused by:
	I0414 17:52:11.276092  213635 kubeadm.go:310] 		- The kubelet is not running
	I0414 17:52:11.276213  213635 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 17:52:11.276222  213635 kubeadm.go:310] 
	I0414 17:52:11.276375  213635 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 17:52:11.276431  213635 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 17:52:11.276482  213635 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 17:52:11.276502  213635 kubeadm.go:310] 
	I0414 17:52:11.276617  213635 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 17:52:11.276722  213635 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0414 17:52:11.276733  213635 kubeadm.go:310] 
	I0414 17:52:11.276827  213635 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 17:52:11.276902  213635 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 17:52:11.276994  213635 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 17:52:11.277119  213635 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 17:52:11.277137  213635 kubeadm.go:310] 
	I0414 17:52:11.277720  213635 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:52:11.277871  213635 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 17:52:11.277974  213635 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0414 17:52:11.278218  213635 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0414 17:52:11.278258  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 17:52:11.738009  213635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:52:11.752929  213635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:52:11.762849  213635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:52:11.762865  213635 kubeadm.go:157] found existing configuration files:
	
	I0414 17:52:11.762901  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:52:11.772188  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:52:11.772240  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:52:11.781466  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:52:11.790582  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:52:11.790624  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:52:11.799766  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:52:11.808443  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:52:11.808481  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:52:11.817544  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:52:11.826418  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:52:11.826464  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
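
The four grep-and-remove pairs above all apply one rule: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so kubeadm can regenerate it. A sketch of the equivalent shell loop (illustrative only, not minikube's actual implementation):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it already points at the expected endpoint.
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
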
	I0414 17:52:11.835946  213635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:52:11.910031  213635 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 17:52:11.910113  213635 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:52:12.048882  213635 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:52:12.049032  213635 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:52:12.049160  213635 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 17:52:12.216124  213635 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:52:12.218841  213635 out.go:235]   - Generating certificates and keys ...
	I0414 17:52:12.218938  213635 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:52:12.219030  213635 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:52:12.219153  213635 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 17:52:12.219244  213635 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 17:52:12.219342  213635 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 17:52:12.219420  213635 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 17:52:12.219507  213635 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 17:52:12.219612  213635 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 17:52:12.219690  213635 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 17:52:12.219802  213635 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 17:52:12.219867  213635 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 17:52:12.219917  213635 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:52:12.485118  213635 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:52:12.699901  213635 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:52:12.798407  213635 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:52:12.941803  213635 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:52:12.964937  213635 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:52:12.965897  213635 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:52:12.966059  213635 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:52:13.109607  213635 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:52:13.112109  213635 out.go:235]   - Booting up control plane ...
	I0414 17:52:13.112248  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:52:13.115664  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:52:13.117940  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:52:13.119128  213635 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:52:13.123525  213635 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 17:52:53.126895  213635 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 17:52:53.127019  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:52:53.127237  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:52:58.127800  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:52:58.127997  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:53:08.128675  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:53:08.128878  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:53:28.129416  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:53:28.129642  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:54:08.127998  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:54:08.128303  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:54:08.128326  213635 kubeadm.go:310] 
	I0414 17:54:08.128362  213635 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 17:54:08.128505  213635 kubeadm.go:310] 		timed out waiting for the condition
	I0414 17:54:08.128527  213635 kubeadm.go:310] 
	I0414 17:54:08.128595  213635 kubeadm.go:310] 	This error is likely caused by:
	I0414 17:54:08.128640  213635 kubeadm.go:310] 		- The kubelet is not running
	I0414 17:54:08.128791  213635 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 17:54:08.128814  213635 kubeadm.go:310] 
	I0414 17:54:08.128946  213635 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 17:54:08.128997  213635 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 17:54:08.129043  213635 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 17:54:08.129052  213635 kubeadm.go:310] 
	I0414 17:54:08.129167  213635 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 17:54:08.129296  213635 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 17:54:08.129314  213635 kubeadm.go:310] 
	I0414 17:54:08.129479  213635 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 17:54:08.129615  213635 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 17:54:08.129706  213635 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 17:54:08.129814  213635 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 17:54:08.129824  213635 kubeadm.go:310] 
	I0414 17:54:08.130345  213635 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:54:08.130443  213635 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 17:54:08.130555  213635 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 17:54:08.130646  213635 kubeadm.go:394] duration metric: took 8m1.792756267s to StartCluster
	I0414 17:54:08.130721  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:54:08.130802  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:54:08.175207  213635 cri.go:89] found id: ""
	I0414 17:54:08.175243  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.175251  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:54:08.175257  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:54:08.175311  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:54:08.209345  213635 cri.go:89] found id: ""
	I0414 17:54:08.209370  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.209377  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:54:08.209382  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:54:08.209428  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:54:08.244901  213635 cri.go:89] found id: ""
	I0414 17:54:08.244937  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.244946  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:54:08.244952  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:54:08.245022  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:54:08.279974  213635 cri.go:89] found id: ""
	I0414 17:54:08.279999  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.280006  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:54:08.280011  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:54:08.280065  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:54:08.312666  213635 cri.go:89] found id: ""
	I0414 17:54:08.312691  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.312701  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:54:08.312708  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:54:08.312761  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:54:08.345579  213635 cri.go:89] found id: ""
	I0414 17:54:08.345609  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.345619  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:54:08.345627  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:54:08.345682  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:54:08.377810  213635 cri.go:89] found id: ""
	I0414 17:54:08.377844  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.377853  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:54:08.377858  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:54:08.377900  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:54:08.409648  213635 cri.go:89] found id: ""
	I0414 17:54:08.409673  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.409681  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
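
Each "listing CRI containers" / "0 containers" pair above is the same probe run against a different component name, and every one comes back empty because the kubelet never started any static pods. A sketch of the equivalent loop:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      # --quiet prints only container IDs; empty output means no match.
      sudo crictl ps -a --quiet --name="$name"
    done
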
	I0414 17:54:08.409697  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:54:08.409708  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:54:08.422905  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:54:08.422930  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:54:08.495193  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
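
The refused connection to localhost:8443 is consistent with the empty kube-apiserver listing above: nothing is serving the API port. A quick check from inside the node (a sketch; -k skips TLS verification, since the only question is whether anything answers at all):

    # Expect "connection refused" while no apiserver container is running.
    curl -k https://localhost:8443/healthz
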
	I0414 17:54:08.495217  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:54:08.495232  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:54:08.603072  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:54:08.603108  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:54:08.640028  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:54:08.640058  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0414 17:54:08.690480  213635 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 17:54:08.690537  213635 out.go:270] * 
	W0414 17:54:08.690590  213635 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	W0414 17:54:08.690605  213635 out.go:270] * 
	W0414 17:54:08.691392  213635 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 17:54:08.694565  213635 out.go:201] 
	W0414 17:54:08.695675  213635 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	W0414 17:54:08.695709  213635 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 17:54:08.695724  213635 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
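
The suggestion above corresponds to a start invocation along these lines (a sketch: the profile name and Kubernetes version come from this log, and --container-runtime=crio is assumed from the job name; any other flags the test normally passes are omitted):

    minikube start -p old-k8s-version-768580 \
      --kubernetes-version=v1.20.0 \
      --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd
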
	I0414 17:54:08.697684  213635 out.go:201] 
	
	
	==> CRI-O <==
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.005912777Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744653250005896047,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=636c806d-8089-4565-8fb3-aa9fc502ffa3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.006599837Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8fd03886-eefa-4b93-9a81-266e1be8645e name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.006645470Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8fd03886-eefa-4b93-9a81-266e1be8645e name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.006675472Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8fd03886-eefa-4b93-9a81-266e1be8645e name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.035366782Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=353f30bd-9ef0-4960-a6fb-18272876fb44 name=/runtime.v1.RuntimeService/Version
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.035422876Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=353f30bd-9ef0-4960-a6fb-18272876fb44 name=/runtime.v1.RuntimeService/Version
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.036591490Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b14f493-7e7c-49f3-8df9-0a68cd0897e7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.036970493Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744653250036951726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b14f493-7e7c-49f3-8df9-0a68cd0897e7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.037600411Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=61af977f-9ac7-4798-a8f2-90b70e971f08 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.037664770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=61af977f-9ac7-4798-a8f2-90b70e971f08 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.037698870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=61af977f-9ac7-4798-a8f2-90b70e971f08 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.067675140Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9136b846-8de2-4de0-8d53-b555ca3df3d3 name=/runtime.v1.RuntimeService/Version
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.067754071Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9136b846-8de2-4de0-8d53-b555ca3df3d3 name=/runtime.v1.RuntimeService/Version
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.068977263Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f62e231e-906b-4c56-8faf-f2872833aae9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.069309908Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744653250069291361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f62e231e-906b-4c56-8faf-f2872833aae9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.069928689Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=531f2f72-dddb-4f26-9ba2-5e0bf000c90a name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.069995779Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=531f2f72-dddb-4f26-9ba2-5e0bf000c90a name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.070028333Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=531f2f72-dddb-4f26-9ba2-5e0bf000c90a name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.101184562Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e0e5376b-f09c-4b01-a7db-74762434cc72 name=/runtime.v1.RuntimeService/Version
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.101257007Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e0e5376b-f09c-4b01-a7db-74762434cc72 name=/runtime.v1.RuntimeService/Version
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.102620354Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=726e093f-140a-40d7-9068-f06e44d06b57 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.103018872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744653250103000739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=726e093f-140a-40d7-9068-f06e44d06b57 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.103594741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e528565-1d24-4ebf-9672-fb8b028d6736 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.103656743Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e528565-1d24-4ebf-9672-fb8b028d6736 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 17:54:10 old-k8s-version-768580 crio[629]: time="2025-04-14 17:54:10.103688293Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6e528565-1d24-4ebf-9672-fb8b028d6736 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr14 17:45] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055960] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049332] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.224319] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.838807] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.420171] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.914151] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.065125] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060469] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.182225] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.143184] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.256654] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[Apr14 17:46] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.073476] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.861304] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[ +14.344832] kauditd_printk_skb: 46 callbacks suppressed
	[Apr14 17:50] systemd-fstab-generator[5080]: Ignoring "noauto" option for root device
	[Apr14 17:52] systemd-fstab-generator[5361]: Ignoring "noauto" option for root device
	[  +0.059704] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 17:54:10 up 8 min,  0 users,  load average: 0.16, 0.15, 0.09
	Linux old-k8s-version-768580 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 14 17:54:08 old-k8s-version-768580 kubelet[5546]: internal/singleflight.(*Group).doCall(0x70c5750, 0xc000b240a0, 0xc000a20c30, 0x23, 0xc00040d840)
	Apr 14 17:54:08 old-k8s-version-768580 kubelet[5546]:         /usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e
	Apr 14 17:54:08 old-k8s-version-768580 kubelet[5546]: created by internal/singleflight.(*Group).DoChan
	Apr 14 17:54:08 old-k8s-version-768580 kubelet[5546]:         /usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
	Apr 14 17:54:08 old-k8s-version-768580 kubelet[5546]: goroutine 169 [syscall]:
	Apr 14 17:54:08 old-k8s-version-768580 kubelet[5546]: net._C2func_getaddrinfo(0xc000ac1800, 0x0, 0xc000e32060, 0xc000122ef8, 0x0, 0x0, 0x0)
	Apr 14 17:54:08 old-k8s-version-768580 kubelet[5546]:         _cgo_gotypes.go:94 +0x55
	Apr 14 17:54:08 old-k8s-version-768580 kubelet[5546]: net.cgoLookupIPCNAME.func1(0xc000ac1800, 0x20, 0x20, 0xc000e32060, 0xc000122ef8, 0x0, 0xc000a566a0, 0x57a492)
	Apr 14 17:54:08 old-k8s-version-768580 kubelet[5546]:         /usr/local/go/src/net/cgo_unix.go:161 +0xc5
	Apr 14 17:54:08 old-k8s-version-768580 kubelet[5546]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc000a20c00, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Apr 14 17:54:08 old-k8s-version-768580 kubelet[5546]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Apr 14 17:54:08 old-k8s-version-768580 kubelet[5546]: net.cgoIPLookup(0xc000a41c80, 0x48ab5d6, 0x3, 0xc000a20c00, 0x1f)
	Apr 14 17:54:08 old-k8s-version-768580 kubelet[5546]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Apr 14 17:54:08 old-k8s-version-768580 kubelet[5546]: created by net.cgoLookupIP
	Apr 14 17:54:08 old-k8s-version-768580 kubelet[5546]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Apr 14 17:54:08 old-k8s-version-768580 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 14 17:54:08 old-k8s-version-768580 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 14 17:54:08 old-k8s-version-768580 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Apr 14 17:54:08 old-k8s-version-768580 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 14 17:54:08 old-k8s-version-768580 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 14 17:54:08 old-k8s-version-768580 kubelet[5613]: I0414 17:54:08.843213    5613 server.go:416] Version: v1.20.0
	Apr 14 17:54:08 old-k8s-version-768580 kubelet[5613]: I0414 17:54:08.843906    5613 server.go:837] Client rotation is on, will bootstrap in background
	Apr 14 17:54:08 old-k8s-version-768580 kubelet[5613]: I0414 17:54:08.846858    5613 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 14 17:54:08 old-k8s-version-768580 kubelet[5613]: W0414 17:54:08.847893    5613 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 14 17:54:08 old-k8s-version-768580 kubelet[5613]: I0414 17:54:08.848379    5613 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
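
The kubelet tail in the logs above points at the likely root cause: kubelet v1.20 warns "Cannot detect current cgroup on cgroup v2" and then exits with status 255, and systemd is already at restart attempt 20, i.e. a crash loop. Cgroup v2 support in the kubelet was still immature at v1.20, so this failure mode is plausible on a cgroup v2 host. One way to check which cgroup mode the VM booted with:

    # Prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on cgroup v1.
    stat -fc %T /sys/fs/cgroup/
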
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-768580 -n old-k8s-version-768580
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-768580 -n old-k8s-version-768580: exit status 2 (222.290674ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-768580" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (527.91s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
E0414 17:54:13.078705  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/no-preload-721806/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
E0414 17:54:20.308744  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/default-k8s-diff-port-061428/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
[the preceding warning repeated a further 21 times]
E0414 17:54:42.840688  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
[the same warning repeated 91 times]
E0414 17:56:13.897966  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
[the same warning repeated 16 times]
E0414 17:56:29.218000  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/no-preload-721806/client.crt: no such file or directory" logger="UnhandledError"
[the same warning repeated 7 times]
E0414 17:56:36.447448  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/default-k8s-diff-port-061428/client.crt: no such file or directory" logger="UnhandledError"
[the same warning repeated 9 times]
E0414 17:56:45.781867  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
[the same warning repeated 11 times]
E0414 17:56:56.920728  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/no-preload-721806/client.crt: no such file or directory" logger="UnhandledError"
[the same warning repeated 7 times]
E0414 17:57:04.150593  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/default-k8s-diff-port-061428/client.crt: no such file or directory" logger="UnhandledError"
[the same warning repeated 5 times]
E0414 17:57:08.945757  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
[previous warning repeated 13 more times]
E0414 17:57:23.149376  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
[previous warning repeated 13 more times]
E0414 17:57:36.962839  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
[previous warning repeated 14 more times]
E0414 17:57:52.062976  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
[previous warning repeated 16 more times]
E0414 17:58:08.845551  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
[previous warning repeated 21 more times]
E0414 17:58:31.089632  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
[previous warning repeated 14 more times]
E0414 17:58:45.219906  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
E0414 17:58:46.214246  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
[previous warning repeated 21 more times]
E0414 17:59:08.605478  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
[previous warning repeated 5 more times]
E0414 17:59:15.125578  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
[previous warning repeated 27 more times]
E0414 17:59:42.841105  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
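(The cert_rotation errors interleaved through this log, like the one above, appear unrelated to the dashboard poll: client-go's certificate-rotation watcher is still trying to reload client certificates under .minikube/profiles/<profile>/client.crt for profiles that earlier tests in this run have likely already deleted, so each reload fails with "no such file or directory". They read as harmless background noise rather than part of this failure.)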
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
[previous line repeated 25 more times]
E0414 18:00:08.281578  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
[previous line repeated 22 more times]
E0414 18:00:31.669946  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
[previous line repeated 33 more times]
E0414 18:01:05.906681  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
[previous line repeated 7 more times]
E0414 18:01:13.897606  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
[previous line repeated 15 more times]
E0414 18:01:29.217892  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/no-preload-721806/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
[previous line repeated 4 more times]
E0414 18:01:34.172369  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
E0414 18:01:36.446823  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/default-k8s-diff-port-061428/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
[previous line repeated 8 more times]
E0414 18:01:45.782028  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
[previous line repeated 22 more times]
E0414 18:02:08.945793  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
[previous line repeated 13 more times]
E0414 18:02:23.150248  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
E0414 18:02:52.063279  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-768580 -n old-k8s-version-768580
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-768580 -n old-k8s-version-768580: exit status 2 (214.332785ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-768580" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
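The run of identical helpers_test.go:329 warnings above is produced by a poll loop that repeatedly lists pods by label selector and retries on every error until its context deadline expires. A minimal client-go sketch of that pattern (the helper name, intervals, and kubeconfig handling here are illustrative assumptions, not minikube's actual helper code):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDashboard polls until a pod matching the label selector reports
// Running, logging a warning on every failed list call -- which is what
// yields the long run of "connection refused" lines when the apiserver
// stays down. (Illustrative sketch, not minikube's helper code.)
func waitForDashboard(ctx context.Context, cs kubernetes.Interface) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				fmt.Printf("WARNING: pod list returned: %v\n", err) // swallow and keep polling
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == "Running" {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitForDashboard(context.Background(), cs); err != nil {
		fmt.Println("failed waiting for dashboard:", err) // e.g. context deadline exceeded
	}
}

Because list errors are swallowed and retried, an apiserver that never comes back yields exactly the failure mode seen here: many identical warnings followed by "context deadline exceeded".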
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-768580 -n old-k8s-version-768580
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-768580 -n old-k8s-version-768580: exit status 2 (207.309731ms)

-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
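minikube status reports component state on stdout and also encodes it in the exit code, which is why the harness marks exit status 2 as "(may be ok)" and reads the state string instead of failing outright. A minimal sketch of shelling out the same way the test does (binary path and profile are taken from the log above; error handling is simplified):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirror the test's invocation of the built minikube binary.
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", "old-k8s-version-768580", "-n", "old-k8s-version-768580")
	out, err := cmd.Output() // stdout carries "Running" / "Stopped"
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode() // e.g. 2 when some component is stopped
	} else if err != nil {
		panic(err) // binary missing or not runnable, not an exit-status problem
	}
	fmt.Printf("host state %q, exit status %d (may be ok)\n", out, code)
}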
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-768580 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-768580 logs -n 25: (1.098907151s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p default-k8s-diff-port-061428       | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:43 UTC | 14 Apr 25 17:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:43 UTC | 14 Apr 25 17:49 UTC |
	|         | default-k8s-diff-port-061428                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-418468            | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:43 UTC | 14 Apr 25 17:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-418468                                  | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:43 UTC | 14 Apr 25 17:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-768580        | old-k8s-version-768580       | jenkins | v1.35.0 | 14 Apr 25 17:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-418468                 | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:45 UTC | 14 Apr 25 17:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-418468                                  | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:45 UTC | 14 Apr 25 17:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-768580                              | old-k8s-version-768580       | jenkins | v1.35.0 | 14 Apr 25 17:45 UTC | 14 Apr 25 17:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-768580             | old-k8s-version-768580       | jenkins | v1.35.0 | 14 Apr 25 17:45 UTC | 14 Apr 25 17:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-768580                              | old-k8s-version-768580       | jenkins | v1.35.0 | 14 Apr 25 17:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-061428                           | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | default-k8s-diff-port-061428                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | default-k8s-diff-port-061428                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | default-k8s-diff-port-061428                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | default-k8s-diff-port-061428                           |                              |         |         |                     |                     |
	| image   | no-preload-721806 image list                           | no-preload-721806            | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-721806                                   | no-preload-721806            | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-721806                                   | no-preload-721806            | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-721806                                   | no-preload-721806            | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	| delete  | -p no-preload-721806                                   | no-preload-721806            | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	| image   | embed-certs-418468 image list                          | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:51 UTC | 14 Apr 25 17:51 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-418468                                  | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:51 UTC | 14 Apr 25 17:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-418468                                  | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:51 UTC | 14 Apr 25 17:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-418468                                  | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:51 UTC | 14 Apr 25 17:51 UTC |
	| delete  | -p embed-certs-418468                                  | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:51 UTC | 14 Apr 25 17:51 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 17:45:23
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 17:45:23.282546  213635 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:45:23.282636  213635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:45:23.282647  213635 out.go:358] Setting ErrFile to fd 2...
	I0414 17:45:23.282663  213635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:45:23.282871  213635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	I0414 17:45:23.283429  213635 out.go:352] Setting JSON to false
	I0414 17:45:23.284348  213635 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8821,"bootTime":1744643902,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 17:45:23.284402  213635 start.go:139] virtualization: kvm guest
	I0414 17:45:23.286322  213635 out.go:177] * [old-k8s-version-768580] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 17:45:23.287426  213635 out.go:177]   - MINIKUBE_LOCATION=20349
	I0414 17:45:23.287431  213635 notify.go:220] Checking for updates...
	I0414 17:45:23.289881  213635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 17:45:23.291059  213635 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:45:23.292002  213635 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 17:45:23.293350  213635 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 17:45:23.294814  213635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 17:45:23.296431  213635 config.go:182] Loaded profile config "old-k8s-version-768580": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 17:45:23.296945  213635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:45:23.296998  213635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:45:23.313119  213635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37103
	I0414 17:45:23.313580  213635 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:45:23.314124  213635 main.go:141] libmachine: Using API Version  1
	I0414 17:45:23.314148  213635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:45:23.314493  213635 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:45:23.314664  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:23.316572  213635 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0414 17:45:23.317553  213635 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 17:45:23.317841  213635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:45:23.317876  213635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:45:23.333791  213635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44023
	I0414 17:45:23.334298  213635 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:45:23.334832  213635 main.go:141] libmachine: Using API Version  1
	I0414 17:45:23.334859  213635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:45:23.335206  213635 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:45:23.335410  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:23.372523  213635 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 17:45:23.373766  213635 start.go:297] selected driver: kvm2
	I0414 17:45:23.373785  213635 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-768580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:45:23.373971  213635 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 17:45:23.374697  213635 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:45:23.374756  213635 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20349-149500/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 17:45:23.390328  213635 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 17:45:23.390891  213635 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 17:45:23.390939  213635 cni.go:84] Creating CNI manager for ""
	I0414 17:45:23.390997  213635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:45:23.391057  213635 start.go:340] cluster config:
	{Name:old-k8s-version-768580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:45:23.391177  213635 iso.go:125] acquiring lock: {Name:mk56ab209abfa01de10f2f82564ecd03de00499a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:45:23.393503  213635 out.go:177] * Starting "old-k8s-version-768580" primary control-plane node in "old-k8s-version-768580" cluster
	I0414 17:45:18.829481  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Start
	I0414 17:45:18.829626  213406 main.go:141] libmachine: (embed-certs-418468) starting domain...
	I0414 17:45:18.829645  213406 main.go:141] libmachine: (embed-certs-418468) ensuring networks are active...
	I0414 17:45:18.830375  213406 main.go:141] libmachine: (embed-certs-418468) Ensuring network default is active
	I0414 17:45:18.830697  213406 main.go:141] libmachine: (embed-certs-418468) Ensuring network mk-embed-certs-418468 is active
	I0414 17:45:18.831060  213406 main.go:141] libmachine: (embed-certs-418468) getting domain XML...
	I0414 17:45:18.831881  213406 main.go:141] libmachine: (embed-certs-418468) creating domain...
	I0414 17:45:20.130585  213406 main.go:141] libmachine: (embed-certs-418468) waiting for IP...
	I0414 17:45:20.131429  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:20.131906  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:20.131976  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:20.131884  213441 retry.go:31] will retry after 192.442813ms: waiting for domain to come up
	I0414 17:45:20.326250  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:20.326808  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:20.326847  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:20.326777  213441 retry.go:31] will retry after 380.44265ms: waiting for domain to come up
	I0414 17:45:20.709212  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:20.709718  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:20.709747  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:20.709659  213441 retry.go:31] will retry after 412.048423ms: waiting for domain to come up
	I0414 17:45:21.123129  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:21.123522  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:21.123544  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:21.123486  213441 retry.go:31] will retry after 384.561435ms: waiting for domain to come up
	I0414 17:45:21.510029  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:21.510559  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:21.510591  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:21.510521  213441 retry.go:31] will retry after 501.73701ms: waiting for domain to come up
	I0414 17:45:22.014298  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:22.014882  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:22.014914  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:22.014842  213441 retry.go:31] will retry after 757.183938ms: waiting for domain to come up
	I0414 17:45:22.773705  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:22.774323  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:22.774350  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:22.774269  213441 retry.go:31] will retry after 986.137988ms: waiting for domain to come up
	I0414 17:45:20.888278  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:23.386664  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:24.646290  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:27.145214  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:23.394590  213635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 17:45:23.394621  213635 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 17:45:23.394628  213635 cache.go:56] Caching tarball of preloaded images
	I0414 17:45:23.394721  213635 preload.go:172] Found /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 17:45:23.394735  213635 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0414 17:45:23.394836  213635 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/config.json ...
	I0414 17:45:23.395013  213635 start.go:360] acquireMachinesLock for old-k8s-version-768580: {Name:mk6f64d523f60ec1e047c10a4c586315976dcd43 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 17:45:23.762349  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:23.762955  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:23.762979  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:23.762917  213441 retry.go:31] will retry after 1.10793688s: waiting for domain to come up
	I0414 17:45:24.872355  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:24.872838  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:24.872868  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:24.872798  213441 retry.go:31] will retry after 1.289889749s: waiting for domain to come up
	I0414 17:45:26.163838  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:26.164300  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:26.164340  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:26.164276  213441 retry.go:31] will retry after 1.779294897s: waiting for domain to come up
	I0414 17:45:27.946417  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:27.946918  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:27.946955  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:27.946893  213441 retry.go:31] will retry after 1.873070528s: waiting for domain to come up
	I0414 17:45:25.887339  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:27.888458  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:30.386702  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:29.147468  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:31.647410  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:29.821493  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:29.822082  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:29.822114  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:29.822017  213441 retry.go:31] will retry after 2.200299666s: waiting for domain to come up
	I0414 17:45:32.024275  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:32.024774  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:32.024804  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:32.024731  213441 retry.go:31] will retry after 4.490034828s: waiting for domain to come up
	I0414 17:45:32.885679  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:34.886662  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:34.145579  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:36.146382  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:38.146697  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:38.262514  213635 start.go:364] duration metric: took 14.867477628s to acquireMachinesLock for "old-k8s-version-768580"
	I0414 17:45:38.262567  213635 start.go:96] Skipping create...Using existing machine configuration
	I0414 17:45:38.262576  213635 fix.go:54] fixHost starting: 
	I0414 17:45:38.262931  213635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:45:38.262975  213635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:45:38.282724  213635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39841
	I0414 17:45:38.283218  213635 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:45:38.283779  213635 main.go:141] libmachine: Using API Version  1
	I0414 17:45:38.283810  213635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:45:38.284194  213635 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:45:38.284403  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:38.284564  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetState
	I0414 17:45:38.285903  213635 fix.go:112] recreateIfNeeded on old-k8s-version-768580: state=Stopped err=<nil>
	I0414 17:45:38.285937  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	W0414 17:45:38.286051  213635 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 17:45:38.287537  213635 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-768580" ...
	I0414 17:45:36.517497  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.518002  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has current primary IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.518029  213406 main.go:141] libmachine: (embed-certs-418468) found domain IP: 192.168.50.199
	I0414 17:45:36.518042  213406 main.go:141] libmachine: (embed-certs-418468) reserving static IP address...
	I0414 17:45:36.518423  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "embed-certs-418468", mac: "52:54:00:2f:33:03", ip: "192.168.50.199"} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:36.518454  213406 main.go:141] libmachine: (embed-certs-418468) DBG | skip adding static IP to network mk-embed-certs-418468 - found existing host DHCP lease matching {name: "embed-certs-418468", mac: "52:54:00:2f:33:03", ip: "192.168.50.199"}
	I0414 17:45:36.518467  213406 main.go:141] libmachine: (embed-certs-418468) reserved static IP address 192.168.50.199 for domain embed-certs-418468
	I0414 17:45:36.518485  213406 main.go:141] libmachine: (embed-certs-418468) waiting for SSH...
	I0414 17:45:36.518500  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Getting to WaitForSSH function...
	I0414 17:45:36.520360  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.520616  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:36.520653  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.520758  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Using SSH client type: external
	I0414 17:45:36.520776  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Using SSH private key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa (-rw-------)
	I0414 17:45:36.520809  213406 main.go:141] libmachine: (embed-certs-418468) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.199 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 17:45:36.520821  213406 main.go:141] libmachine: (embed-certs-418468) DBG | About to run SSH command:
	I0414 17:45:36.520831  213406 main.go:141] libmachine: (embed-certs-418468) DBG | exit 0
	I0414 17:45:36.649576  213406 main.go:141] libmachine: (embed-certs-418468) DBG | SSH cmd err, output: <nil>: 
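
The block above is minikube's external-ssh readiness probe: it shells out to the system ssh binary with host-key checking disabled and runs `exit 0` until the guest's sshd answers. The following Go sketch reproduces that loop under stated assumptions; the host address, key path, poll interval, and timeout are illustrative, not taken from minikube's sources.

	// Minimal sketch of "waiting for SSH": retry `ssh ... exit 0` until it
	// succeeds or a deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForSSH(addr, keyPath string, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-i", keyPath,
				"docker@"+addr, "exit 0")
			if err := cmd.Run(); err == nil {
				return nil // the guest sshd answered; provisioning can proceed
			}
			time.Sleep(2 * time.Second) // illustrative poll interval
		}
		return fmt.Errorf("ssh to %s not ready after %v", addr, deadline)
	}

	func main() {
		if err := waitForSSH("192.168.50.199", "/path/to/id_rsa", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
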
	I0414 17:45:36.649973  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetConfigRaw
	I0414 17:45:36.650596  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetIP
	I0414 17:45:36.653078  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.653409  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:36.653438  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.653654  213406 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/config.json ...
	I0414 17:45:36.653850  213406 machine.go:93] provisionDockerMachine start ...
	I0414 17:45:36.653883  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:45:36.654093  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:36.656193  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.656501  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:36.656527  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.656658  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:36.656818  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:36.656950  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:36.657070  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:36.657214  213406 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:36.657429  213406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.199 22 <nil> <nil>}
	I0414 17:45:36.657439  213406 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 17:45:36.765740  213406 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 17:45:36.765765  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetMachineName
	I0414 17:45:36.766013  213406 buildroot.go:166] provisioning hostname "embed-certs-418468"
	I0414 17:45:36.766041  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetMachineName
	I0414 17:45:36.766237  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:36.768833  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.769137  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:36.769162  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.769335  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:36.769500  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:36.769623  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:36.769731  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:36.769886  213406 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:36.770105  213406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.199 22 <nil> <nil>}
	I0414 17:45:36.770120  213406 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-418468 && echo "embed-certs-418468" | sudo tee /etc/hostname
	I0414 17:45:36.893279  213406 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-418468
	
	I0414 17:45:36.893301  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:36.896024  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.896386  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:36.896415  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.896583  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:36.896764  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:36.896953  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:36.897101  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:36.897270  213406 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:36.897545  213406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.199 22 <nil> <nil>}
	I0414 17:45:36.897570  213406 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-418468' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-418468/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-418468' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 17:45:37.024782  213406 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 17:45:37.024811  213406 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20349-149500/.minikube CaCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20349-149500/.minikube}
	I0414 17:45:37.024840  213406 buildroot.go:174] setting up certificates
	I0414 17:45:37.024850  213406 provision.go:84] configureAuth start
	I0414 17:45:37.024858  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetMachineName
	I0414 17:45:37.025122  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetIP
	I0414 17:45:37.027788  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.028176  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:37.028213  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.028409  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:37.030616  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.030956  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:37.030981  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.031177  213406 provision.go:143] copyHostCerts
	I0414 17:45:37.031234  213406 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem, removing ...
	I0414 17:45:37.031248  213406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem
	I0414 17:45:37.031310  213406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem (1082 bytes)
	I0414 17:45:37.031401  213406 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem, removing ...
	I0414 17:45:37.031409  213406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem
	I0414 17:45:37.031435  213406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem (1123 bytes)
	I0414 17:45:37.031497  213406 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem, removing ...
	I0414 17:45:37.031504  213406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem
	I0414 17:45:37.031523  213406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem (1675 bytes)
	I0414 17:45:37.031647  213406 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem org=jenkins.embed-certs-418468 san=[127.0.0.1 192.168.50.199 embed-certs-418468 localhost minikube]
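
The `generating server cert` line above lists the SAN set baked into the machine's server.pem: the loopback address, the VM IP, the profile name, localhost, and minikube. As a rough illustration of how such a SAN list maps onto x509 fields, here is a self-signed sketch in Go; the real code signs with the CA key referenced in the log rather than self-signing, so treat this purely as a demonstration of the SAN split between IPAddresses and DNSNames.

	// Self-signed sketch: the SAN list from the log, split into x509 fields.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-418468"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log line: IPs and DNS names go into separate fields.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.199")},
			DNSNames:    []string{"embed-certs-418468", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
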
	I0414 17:45:37.627895  213406 provision.go:177] copyRemoteCerts
	I0414 17:45:37.627953  213406 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 17:45:37.627976  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:37.630648  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.630947  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:37.630970  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.631155  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:37.631352  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:37.631526  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:37.631687  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:45:37.716473  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 17:45:37.739929  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 17:45:37.762662  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0414 17:45:37.785121  213406 provision.go:87] duration metric: took 760.257482ms to configureAuth
	I0414 17:45:37.785152  213406 buildroot.go:189] setting minikube options for container-runtime
	I0414 17:45:37.785381  213406 config.go:182] Loaded profile config "embed-certs-418468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:45:37.785455  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:37.788353  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.788678  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:37.788705  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.788883  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:37.789017  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:37.789194  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:37.789409  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:37.789591  213406 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:37.789865  213406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.199 22 <nil> <nil>}
	I0414 17:45:37.789886  213406 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 17:45:38.021469  213406 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 17:45:38.021530  213406 machine.go:96] duration metric: took 1.367637028s to provisionDockerMachine
	I0414 17:45:38.021548  213406 start.go:293] postStartSetup for "embed-certs-418468" (driver="kvm2")
	I0414 17:45:38.021567  213406 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 17:45:38.021593  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:45:38.021949  213406 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 17:45:38.021980  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:38.024762  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.025141  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:38.025169  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.025357  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:38.025523  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:38.025702  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:38.025862  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:45:38.112512  213406 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 17:45:38.116757  213406 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 17:45:38.116780  213406 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/addons for local assets ...
	I0414 17:45:38.116832  213406 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/files for local assets ...
	I0414 17:45:38.116909  213406 filesync.go:149] local asset: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem -> 1566332.pem in /etc/ssl/certs
	I0414 17:45:38.116994  213406 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 17:45:38.126428  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:45:38.149529  213406 start.go:296] duration metric: took 127.965801ms for postStartSetup
	I0414 17:45:38.149559  213406 fix.go:56] duration metric: took 19.339332592s for fixHost
	I0414 17:45:38.149597  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:38.152452  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.152857  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:38.152886  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.153029  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:38.153208  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:38.153357  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:38.153527  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:38.153719  213406 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:38.153980  213406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.199 22 <nil> <nil>}
	I0414 17:45:38.153992  213406 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 17:45:38.262398  213406 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744652738.233356501
	
	I0414 17:45:38.262419  213406 fix.go:216] guest clock: 1744652738.233356501
	I0414 17:45:38.262426  213406 fix.go:229] Guest: 2025-04-14 17:45:38.233356501 +0000 UTC Remote: 2025-04-14 17:45:38.149564097 +0000 UTC m=+19.473974968 (delta=83.792404ms)
	I0414 17:45:38.262443  213406 fix.go:200] guest clock delta is within tolerance: 83.792404ms
	I0414 17:45:38.262448  213406 start.go:83] releasing machines lock for "embed-certs-418468", held for 19.452231962s
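
The clock check just above runs `date +%s.%N` in the guest, parses the seconds.nanoseconds output, and compares it against the host-side reference timestamp; the ~84ms delta is inside tolerance, so no time resync is needed. A minimal sketch of that comparison follows; the one-second tolerance is an assumption for illustration, not minikube's actual constant.

	// Sketch: parse `date +%s.%N` output and check skew against a tolerance.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "1744652738.233356501" into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1744652738.233356501")
		if err != nil {
			panic(err)
		}
		remote := time.Unix(1744652738, 149564097) // host-side reference timestamp
		delta := guest.Sub(remote)
		const tolerance = time.Second // hypothetical threshold
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock skewed by %v; would trigger resync\n", delta)
		}
	}
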
	I0414 17:45:38.262473  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:45:38.262756  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetIP
	I0414 17:45:38.265776  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.266164  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:38.266194  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.266350  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:45:38.266870  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:45:38.267040  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:45:38.267139  213406 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 17:45:38.267189  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:38.267240  213406 ssh_runner.go:195] Run: cat /version.json
	I0414 17:45:38.267261  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:38.269779  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.270093  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:38.270121  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.270142  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.270286  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:38.270481  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:38.270582  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:38.270601  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.270633  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:38.270844  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:38.270834  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:45:38.270994  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:38.271141  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:38.271286  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:45:38.360262  213406 ssh_runner.go:195] Run: systemctl --version
	I0414 17:45:38.384263  213406 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 17:45:38.531682  213406 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 17:45:38.539705  213406 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 17:45:38.539793  213406 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 17:45:38.557292  213406 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 17:45:38.557314  213406 start.go:495] detecting cgroup driver to use...
	I0414 17:45:38.557377  213406 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 17:45:38.573739  213406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 17:45:38.587350  213406 docker.go:217] disabling cri-docker service (if available) ...
	I0414 17:45:38.587392  213406 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 17:45:38.601142  213406 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 17:45:38.615569  213406 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 17:45:38.729585  213406 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 17:45:38.866071  213406 docker.go:233] disabling docker service ...
	I0414 17:45:38.866151  213406 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 17:45:38.881173  213406 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 17:45:38.895808  213406 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 17:45:39.055748  213406 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 17:45:39.185218  213406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 17:45:39.200427  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 17:45:39.223755  213406 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 17:45:39.223823  213406 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:39.235661  213406 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 17:45:39.235737  213406 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:39.248125  213406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:39.260302  213406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:39.270988  213406 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 17:45:39.281488  213406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:39.293593  213406 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:39.314797  213406 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
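
The run of `sed -i` commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, reset conmon_cgroup, and open unprivileged ports via default_sysctls. A Go sketch of the same rewrite-in-place idea, for the first two rules only; this runs against a local copy of the file and is not the ssh_runner code itself.

	// Sketch: regexp-based line rewrites, mirroring the sed rules in the log.
	package main

	import (
		"os"
		"regexp"
	)

	func rewrite(path string, rules map[*regexp.Regexp]string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		for re, repl := range rules {
			data = re.ReplaceAll(data, []byte(repl))
		}
		return os.WriteFile(path, data, 0o644)
	}

	func main() {
		rules := map[*regexp.Regexp]string{
			regexp.MustCompile(`(?m)^.*pause_image = .*$`):    `pause_image = "registry.k8s.io/pause:3.10"`,
			regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`): `cgroup_manager = "cgroupfs"`,
		}
		if err := rewrite("/etc/crio/crio.conf.d/02-crio.conf", rules); err != nil {
			panic(err)
		}
	}
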
	I0414 17:45:39.325696  213406 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 17:45:39.334593  213406 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 17:45:39.334634  213406 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 17:45:39.347505  213406 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 17:45:39.357965  213406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:45:39.484049  213406 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 17:45:39.597745  213406 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 17:45:39.597853  213406 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 17:45:39.602871  213406 start.go:563] Will wait 60s for crictl version
	I0414 17:45:39.602925  213406 ssh_runner.go:195] Run: which crictl
	I0414 17:45:39.606796  213406 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 17:45:39.649955  213406 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 17:45:39.650046  213406 ssh_runner.go:195] Run: crio --version
	I0414 17:45:39.681673  213406 ssh_runner.go:195] Run: crio --version
	I0414 17:45:39.710974  213406 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
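
Before the `Preparing Kubernetes` message, the log shows two 60-second waits: first for /var/run/crio/crio.sock to exist, then for `crictl version` to succeed against it. A minimal sketch of the socket wait; the poll interval is an assumption.

	// Sketch: poll os.Stat on the CRI socket until it appears or we time out.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // crio has created its socket; safe to talk CRI
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("socket %s did not appear within %v", path, timeout)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}
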
	I0414 17:45:36.888095  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:39.387438  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:40.148510  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:42.647398  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:38.288730  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .Start
	I0414 17:45:38.288903  213635 main.go:141] libmachine: (old-k8s-version-768580) starting domain...
	I0414 17:45:38.288928  213635 main.go:141] libmachine: (old-k8s-version-768580) ensuring networks are active...
	I0414 17:45:38.289671  213635 main.go:141] libmachine: (old-k8s-version-768580) Ensuring network default is active
	I0414 17:45:38.290082  213635 main.go:141] libmachine: (old-k8s-version-768580) Ensuring network mk-old-k8s-version-768580 is active
	I0414 17:45:38.290509  213635 main.go:141] libmachine: (old-k8s-version-768580) getting domain XML...
	I0414 17:45:38.291270  213635 main.go:141] libmachine: (old-k8s-version-768580) creating domain...
	I0414 17:45:39.584359  213635 main.go:141] libmachine: (old-k8s-version-768580) waiting for IP...
	I0414 17:45:39.585518  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:39.586108  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:39.586195  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:39.586107  213733 retry.go:31] will retry after 251.417692ms: waiting for domain to come up
	I0414 17:45:39.839778  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:39.840371  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:39.840397  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:39.840338  213733 retry.go:31] will retry after 258.330025ms: waiting for domain to come up
	I0414 17:45:40.100989  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:40.101667  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:40.101696  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:40.101631  213733 retry.go:31] will retry after 334.368733ms: waiting for domain to come up
	I0414 17:45:40.437266  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:40.438218  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:40.438251  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:40.438188  213733 retry.go:31] will retry after 588.313555ms: waiting for domain to come up
	I0414 17:45:41.027969  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:41.028685  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:41.028713  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:41.028667  213733 retry.go:31] will retry after 582.787602ms: waiting for domain to come up
	I0414 17:45:41.613756  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:41.614424  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:41.614476  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:41.614383  213733 retry.go:31] will retry after 695.01431ms: waiting for domain to come up
	I0414 17:45:42.311573  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:42.312134  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:42.312168  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:42.312092  213733 retry.go:31] will retry after 1.050124039s: waiting for domain to come up
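
The `retry.go:31` lines threaded through this section are a growing, jittered backoff around a libvirt DHCP-lease query: each poll that finds no IP for the domain schedules the next attempt after a longer, slightly randomized wait. A generic sketch of that pattern; the lookup function and the sample IP are stand-ins, not libmachine internals.

	// Sketch: jittered exponential backoff while waiting for a domain IP.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func waitForIP(lookupIP func() (string, bool), attempts int) (string, error) {
		backoff := 250 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, ok := lookupIP(); ok {
				return ip, nil
			}
			// Grow the wait and add jitter, matching the irregular intervals above.
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
			time.Sleep(sleep)
			backoff *= 2
		}
		return "", errors.New("domain never reported an IP")
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, bool) {
			calls++
			return "192.168.61.100", calls > 3 // pretend DHCP answers on the 4th poll
		}, 10)
		fmt.Println(ip, err)
	}
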
	I0414 17:45:39.712262  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetIP
	I0414 17:45:39.715292  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:39.715742  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:39.715790  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:39.715889  213406 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0414 17:45:39.720056  213406 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 17:45:39.736486  213406 kubeadm.go:883] updating cluster {Name:embed-certs-418468 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-418468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.199 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 17:45:39.736610  213406 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 17:45:39.736663  213406 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:45:39.774478  213406 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 17:45:39.774571  213406 ssh_runner.go:195] Run: which lz4
	I0414 17:45:39.778933  213406 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 17:45:39.783254  213406 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 17:45:39.783294  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 17:45:41.221460  213406 crio.go:462] duration metric: took 1.44257368s to copy over tarball
	I0414 17:45:41.221534  213406 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 17:45:43.485855  213406 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.264254914s)
	I0414 17:45:43.485888  213406 crio.go:469] duration metric: took 2.264398504s to extract the tarball
	I0414 17:45:43.485899  213406 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 17:45:43.525207  213406 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:45:43.573036  213406 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 17:45:43.573060  213406 cache_images.go:84] Images are preloaded, skipping loading
	I0414 17:45:43.573068  213406 kubeadm.go:934] updating node { 192.168.50.199 8443 v1.32.2 crio true true} ...
	I0414 17:45:43.573156  213406 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-418468 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-418468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 17:45:43.573214  213406 ssh_runner.go:195] Run: crio config
	I0414 17:45:43.633728  213406 cni.go:84] Creating CNI manager for ""
	I0414 17:45:43.633753  213406 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:45:43.633765  213406 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 17:45:43.633791  213406 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.199 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-418468 NodeName:embed-certs-418468 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 17:45:43.633949  213406 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-418468"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.199"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.199"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 17:45:43.634013  213406 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 17:45:43.644883  213406 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 17:45:43.644955  213406 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 17:45:43.658054  213406 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0414 17:45:43.678542  213406 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 17:45:43.698007  213406 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I0414 17:45:41.888968  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:44.387515  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:45.147015  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:47.147667  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:43.363977  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:43.364593  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:43.364642  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:43.364568  213733 retry.go:31] will retry after 1.011314768s: waiting for domain to come up
	I0414 17:45:44.377753  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:44.378268  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:44.378293  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:44.378225  213733 retry.go:31] will retry after 1.856494831s: waiting for domain to come up
	I0414 17:45:46.237268  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:46.237851  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:46.237881  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:46.237785  213733 retry.go:31] will retry after 1.749079149s: waiting for domain to come up
	I0414 17:45:47.990039  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:47.990637  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:47.990670  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:47.990601  213733 retry.go:31] will retry after 2.63350321s: waiting for domain to come up
	I0414 17:45:43.715966  213406 ssh_runner.go:195] Run: grep 192.168.50.199	control-plane.minikube.internal$ /etc/hosts
	I0414 17:45:43.720022  213406 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 17:45:43.733445  213406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:45:43.867405  213406 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:45:43.885300  213406 certs.go:68] Setting up /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468 for IP: 192.168.50.199
	I0414 17:45:43.885324  213406 certs.go:194] generating shared ca certs ...
	I0414 17:45:43.885345  213406 certs.go:226] acquiring lock for ca certs: {Name:mk65518f71a0fe967168d84423f624d889cf0622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:45:43.885512  213406 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key
	I0414 17:45:43.885584  213406 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key
	I0414 17:45:43.885601  213406 certs.go:256] generating profile certs ...
	I0414 17:45:43.885706  213406 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/client.key
	I0414 17:45:43.885782  213406 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/apiserver.key.3a11cdbe
	I0414 17:45:43.885845  213406 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/proxy-client.key
	I0414 17:45:43.885996  213406 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem (1338 bytes)
	W0414 17:45:43.886046  213406 certs.go:480] ignoring /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633_empty.pem, impossibly tiny 0 bytes
	I0414 17:45:43.886061  213406 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 17:45:43.886092  213406 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem (1082 bytes)
	I0414 17:45:43.886126  213406 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem (1123 bytes)
	I0414 17:45:43.886156  213406 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem (1675 bytes)
	I0414 17:45:43.886211  213406 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:45:43.886983  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 17:45:43.924611  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 17:45:43.964084  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 17:45:43.987697  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 17:45:44.015900  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0414 17:45:44.040754  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 17:45:44.075038  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 17:45:44.099117  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 17:45:44.122932  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem --> /usr/share/ca-certificates/156633.pem (1338 bytes)
	I0414 17:45:44.147023  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /usr/share/ca-certificates/1566332.pem (1708 bytes)
	I0414 17:45:44.173790  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 17:45:44.196542  213406 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 17:45:44.214709  213406 ssh_runner.go:195] Run: openssl version
	I0414 17:45:44.220535  213406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156633.pem && ln -fs /usr/share/ca-certificates/156633.pem /etc/ssl/certs/156633.pem"
	I0414 17:45:44.235491  213406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156633.pem
	I0414 17:45:44.240204  213406 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 16:39 /usr/share/ca-certificates/156633.pem
	I0414 17:45:44.240265  213406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156633.pem
	I0414 17:45:44.246067  213406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/156633.pem /etc/ssl/certs/51391683.0"
	I0414 17:45:44.257501  213406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1566332.pem && ln -fs /usr/share/ca-certificates/1566332.pem /etc/ssl/certs/1566332.pem"
	I0414 17:45:44.269005  213406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1566332.pem
	I0414 17:45:44.273740  213406 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 16:39 /usr/share/ca-certificates/1566332.pem
	I0414 17:45:44.273793  213406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1566332.pem
	I0414 17:45:44.279740  213406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1566332.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 17:45:44.291167  213406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 17:45:44.302992  213406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:45:44.307551  213406 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 16:31 /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:45:44.307597  213406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:45:44.313737  213406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
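The hash-and-symlink steps above follow OpenSSL's subject-hash lookup convention; a minimal sketch using the cert names from the log:

	# OpenSSL resolves CAs in /etc/ssl/certs by <subject-hash>.0, so each PEM
	# needs a symlink named after its subject hash (b5213941 for minikubeCA here).
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"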
	I0414 17:45:44.324505  213406 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 17:45:44.328835  213406 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 17:45:44.334805  213406 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 17:45:44.340659  213406 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 17:45:44.346307  213406 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 17:45:44.351874  213406 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 17:45:44.357745  213406 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
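Each -checkend run above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a sketch of the check:

	# Exit status is non-zero when the cert would expire within the window,
	# which is the signal used above to decide whether regeneration is needed.
	if ! openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "cert expires within 24h" >&2
	fi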
	I0414 17:45:44.363409  213406 kubeadm.go:392] StartCluster: {Name:embed-certs-418468 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-418468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.199 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:45:44.363503  213406 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 17:45:44.363553  213406 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:45:44.409542  213406 cri.go:89] found id: ""
	I0414 17:45:44.409612  213406 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 17:45:44.421483  213406 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 17:45:44.421502  213406 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 17:45:44.421553  213406 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 17:45:44.432611  213406 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 17:45:44.433322  213406 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-418468" does not appear in /home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:45:44.433670  213406 kubeconfig.go:62] /home/jenkins/minikube-integration/20349-149500/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-418468" cluster setting kubeconfig missing "embed-certs-418468" context setting]
	I0414 17:45:44.434350  213406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:45:44.435960  213406 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 17:45:44.447295  213406 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.199
	I0414 17:45:44.447335  213406 kubeadm.go:1160] stopping kube-system containers ...
	I0414 17:45:44.447349  213406 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0414 17:45:44.447402  213406 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:45:44.483842  213406 cri.go:89] found id: ""
	I0414 17:45:44.483928  213406 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0414 17:45:44.501457  213406 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:45:44.511344  213406 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:45:44.511360  213406 kubeadm.go:157] found existing configuration files:
	
	I0414 17:45:44.511408  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:45:44.520512  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:45:44.520561  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:45:44.530434  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:45:44.539618  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:45:44.539668  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:45:44.548947  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:45:44.558310  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:45:44.558380  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:45:44.567691  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:45:44.576750  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:45:44.576795  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
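The grep-then-rm sequence above amounts to: keep a kubeconfig only if it already targets the expected endpoint; a sketch of that loop (file names from the log):

	# Any config that does not reference the control-plane endpoint is removed
	# so the following kubeadm kubeconfig phase can regenerate it.
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" 2>/dev/null \
	    || sudo rm -f "/etc/kubernetes/$f"
	done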
	I0414 17:45:44.586464  213406 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:45:44.598983  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:45:44.718594  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:45:45.695980  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:45:45.996480  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:45:46.072138  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
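The five kubeadm invocations above perform a phased restart rather than a full init; condensed into a sketch with the paths from the log:

	# certs -> kubeconfigs -> kubelet bootstrap -> static control-plane pods -> etcd
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  # $phase is intentionally unquoted so each entry splits into subcommand words.
	  sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" \
	    kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done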
	I0414 17:45:46.200254  213406 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:45:46.200333  213406 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:45:46.701083  213406 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:45:47.201283  213406 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:45:47.253490  213406 api_server.go:72] duration metric: took 1.053227948s to wait for apiserver process to appear ...
	I0414 17:45:47.253532  213406 api_server.go:88] waiting for apiserver healthz status ...
	I0414 17:45:47.253571  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:45:47.254266  213406 api_server.go:269] stopped: https://192.168.50.199:8443/healthz: Get "https://192.168.50.199:8443/healthz": dial tcp 192.168.50.199:8443: connect: connection refused
	I0414 17:45:47.753924  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:45:46.704844  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:48.887470  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:50.393514  213406 api_server.go:279] https://192.168.50.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 17:45:50.393621  213406 api_server.go:103] status: https://192.168.50.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 17:45:50.393644  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:45:50.433133  213406 api_server.go:279] https://192.168.50.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 17:45:50.433159  213406 api_server.go:103] status: https://192.168.50.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 17:45:50.753606  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:45:50.758868  213406 api_server.go:279] https://192.168.50.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 17:45:50.758895  213406 api_server.go:103] status: https://192.168.50.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 17:45:51.254607  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:45:51.259648  213406 api_server.go:279] https://192.168.50.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 17:45:51.259677  213406 api_server.go:103] status: https://192.168.50.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 17:45:51.754419  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:45:51.762365  213406 api_server.go:279] https://192.168.50.199:8443/healthz returned 200:
	ok
	I0414 17:45:51.774330  213406 api_server.go:141] control plane version: v1.32.2
	I0414 17:45:51.774361  213406 api_server.go:131] duration metric: took 4.520816141s to wait for apiserver health ...
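The healthz progression above (connection refused, then 403 for anonymous requests, then 500 while the rbac and scheduling post-start hooks finish, then 200) can be reproduced with a simple poll; a sketch:

	# -k skips TLS verification since the probe runs unauthenticated, mirroring
	# the anonymous requests that produced the 403 responses above.
	until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.50.199:8443/healthz)" = "200" ]; do
	  sleep 0.5
	done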
	I0414 17:45:51.774374  213406 cni.go:84] Creating CNI manager for ""
	I0414 17:45:51.774383  213406 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:45:51.775864  213406 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 17:45:49.648757  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:52.147610  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:50.626885  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:50.627340  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:50.627368  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:50.627294  213733 retry.go:31] will retry after 2.57658473s: waiting for domain to come up
	I0414 17:45:53.207057  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:53.207562  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:53.207590  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:53.207520  213733 retry.go:31] will retry after 3.448748827s: waiting for domain to come up
	I0414 17:45:51.777039  213406 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 17:45:51.806959  213406 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
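The 496-byte conflist copied above enables the bridge CNI selected earlier; a hypothetical minimal equivalent (field values are illustrative, not the file minikube actually writes):

	# Illustrative bridge + portmap chain of the kind placed in /etc/cni/net.d.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF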
	I0414 17:45:51.836511  213406 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 17:45:51.848209  213406 system_pods.go:59] 8 kube-system pods found
	I0414 17:45:51.848270  213406 system_pods.go:61] "coredns-668d6bf9bc-z4n2r" [ee9fd5dc-3f74-4c37-8e96-c5ef30b99046] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 17:45:51.848284  213406 system_pods.go:61] "etcd-embed-certs-418468" [4622769e-1912-4b04-84c3-5dea86d25184] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0414 17:45:51.848301  213406 system_pods.go:61] "kube-apiserver-embed-certs-418468" [266cb804-e782-479b-8dac-132b529e46f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0414 17:45:51.848319  213406 system_pods.go:61] "kube-controller-manager-embed-certs-418468" [ba3c123b-8919-45cc-96aa-cdd449e77762] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0414 17:45:51.848328  213406 system_pods.go:61] "kube-proxy-6dft2" [f97366b9-4a39-4659-8e3b-c551085e33d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0414 17:45:51.848340  213406 system_pods.go:61] "kube-scheduler-embed-certs-418468" [12a8ba4d-1e6d-445c-b170-d36f15352271] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0414 17:45:51.848350  213406 system_pods.go:61] "metrics-server-f79f97bbb-9vnsg" [95cc235a-e21c-4a97-9334-d5030b9097d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:45:51.848359  213406 system_pods.go:61] "storage-provisioner" [c969e5f7-a7dc-441f-b8eb-2c3af1803f32] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0414 17:45:51.848371  213406 system_pods.go:74] duration metric: took 11.836623ms to wait for pod list to return data ...
	I0414 17:45:51.848386  213406 node_conditions.go:102] verifying NodePressure condition ...
	I0414 17:45:51.868743  213406 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 17:45:51.868781  213406 node_conditions.go:123] node cpu capacity is 2
	I0414 17:45:51.868805  213406 node_conditions.go:105] duration metric: took 20.412892ms to run NodePressure ...
	I0414 17:45:51.868835  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:45:52.239201  213406 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0414 17:45:52.242855  213406 kubeadm.go:739] kubelet initialised
	I0414 17:45:52.242878  213406 kubeadm.go:740] duration metric: took 3.647876ms waiting for restarted kubelet to initialise ...
	I0414 17:45:52.242889  213406 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:45:52.245160  213406 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-z4n2r" in "kube-system" namespace to be "Ready" ...
	I0414 17:45:51.386891  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:53.895571  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:54.645821  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:56.646257  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:56.658750  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.659197  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has current primary IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.659235  213635 main.go:141] libmachine: (old-k8s-version-768580) found domain IP: 192.168.72.58
	I0414 17:45:56.659245  213635 main.go:141] libmachine: (old-k8s-version-768580) reserving static IP address...
	I0414 17:45:56.659616  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "old-k8s-version-768580", mac: "52:54:00:d8:47:6d", ip: "192.168.72.58"} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:56.659642  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | skip adding static IP to network mk-old-k8s-version-768580 - found existing host DHCP lease matching {name: "old-k8s-version-768580", mac: "52:54:00:d8:47:6d", ip: "192.168.72.58"}
	I0414 17:45:56.659654  213635 main.go:141] libmachine: (old-k8s-version-768580) reserved static IP address 192.168.72.58 for domain old-k8s-version-768580
	I0414 17:45:56.659671  213635 main.go:141] libmachine: (old-k8s-version-768580) waiting for SSH...
	I0414 17:45:56.659708  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | Getting to WaitForSSH function...
	I0414 17:45:56.661714  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.662056  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:56.662087  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.662202  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | Using SSH client type: external
	I0414 17:45:56.662226  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | Using SSH private key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa (-rw-------)
	I0414 17:45:56.662273  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 17:45:56.662292  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | About to run SSH command:
	I0414 17:45:56.662309  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | exit 0
	I0414 17:45:56.781680  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | SSH cmd err, output: <nil>: 
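The WaitForSSH probe above shells out to the system ssh client; reconstructed as a one-liner from the flags shown in the log:

	# 'exit 0' is the whole payload: success just proves sshd is answering.
	ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
	    -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa \
	    docker@192.168.72.58 'exit 0'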
	I0414 17:45:56.782109  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetConfigRaw
	I0414 17:45:56.782751  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:45:56.785158  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.785469  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:56.785502  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.785736  213635 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/config.json ...
	I0414 17:45:56.785961  213635 machine.go:93] provisionDockerMachine start ...
	I0414 17:45:56.785980  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:56.786175  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:56.788189  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.788560  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:56.788585  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.788720  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:56.788874  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:56.789008  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:56.789162  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:56.789316  213635 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:56.789519  213635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:45:56.789529  213635 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 17:45:56.890137  213635 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 17:45:56.890168  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetMachineName
	I0414 17:45:56.890394  213635 buildroot.go:166] provisioning hostname "old-k8s-version-768580"
	I0414 17:45:56.890418  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetMachineName
	I0414 17:45:56.890619  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:56.892966  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.893390  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:56.893410  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.893563  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:56.893750  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:56.893919  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:56.894061  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:56.894207  213635 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:56.894529  213635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:45:56.894549  213635 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-768580 && echo "old-k8s-version-768580" | sudo tee /etc/hostname
	I0414 17:45:57.008447  213635 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-768580
	
	I0414 17:45:57.008471  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:57.011111  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.011428  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:57.011469  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.011584  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:57.011804  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:57.011985  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:57.012096  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:57.012205  213635 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:57.012392  213635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:45:57.012407  213635 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-768580' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-768580/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-768580' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 17:45:57.132689  213635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 17:45:57.132739  213635 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20349-149500/.minikube CaCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20349-149500/.minikube}
	I0414 17:45:57.132763  213635 buildroot.go:174] setting up certificates
	I0414 17:45:57.132773  213635 provision.go:84] configureAuth start
	I0414 17:45:57.132784  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetMachineName
	I0414 17:45:57.133116  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:45:57.136014  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.136345  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:57.136374  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.136550  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:57.139546  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.140028  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:57.140059  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.140266  213635 provision.go:143] copyHostCerts
	I0414 17:45:57.140335  213635 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem, removing ...
	I0414 17:45:57.140361  213635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem
	I0414 17:45:57.140462  213635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem (1082 bytes)
	I0414 17:45:57.140589  213635 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem, removing ...
	I0414 17:45:57.140603  213635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem
	I0414 17:45:57.140655  213635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem (1123 bytes)
	I0414 17:45:57.140743  213635 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem, removing ...
	I0414 17:45:57.140761  213635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem
	I0414 17:45:57.140798  213635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem (1675 bytes)
	I0414 17:45:57.140884  213635 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-768580 san=[127.0.0.1 192.168.72.58 localhost minikube old-k8s-version-768580]
	I0414 17:45:57.638227  213635 provision.go:177] copyRemoteCerts
	I0414 17:45:57.638317  213635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 17:45:57.638348  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:57.641173  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.641530  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:57.641563  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.641714  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:57.641916  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:57.642092  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:57.642232  213635 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:45:57.724240  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 17:45:57.749634  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0414 17:45:57.776416  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 17:45:57.801692  213635 provision.go:87] duration metric: took 668.902854ms to configureAuth
	I0414 17:45:57.801722  213635 buildroot.go:189] setting minikube options for container-runtime
	I0414 17:45:57.801958  213635 config.go:182] Loaded profile config "old-k8s-version-768580": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 17:45:57.802054  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:57.804673  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.805023  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:57.805051  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.805250  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:57.805434  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:57.805597  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:57.805715  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:57.805892  213635 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:57.806134  213635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:45:57.806153  213635 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 17:45:58.022403  213635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 17:45:58.022437  213635 machine.go:96] duration metric: took 1.236460782s to provisionDockerMachine
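The sysconfig fragment written above is presumably consumed by the crio unit through an EnvironmentFile; a hypothetical drop-in showing the usual systemd wiring (the ISO's actual unit layout may differ):

	# Hypothetical drop-in; '-' makes the EnvironmentFile optional.
	sudo mkdir -p /etc/systemd/system/crio.service.d
	sudo tee /etc/systemd/system/crio.service.d/10-minikube.conf >/dev/null <<'EOF'
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart crio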
	I0414 17:45:58.022452  213635 start.go:293] postStartSetup for "old-k8s-version-768580" (driver="kvm2")
	I0414 17:45:58.022466  213635 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 17:45:58.022505  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:58.022841  213635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 17:45:58.022875  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:58.025802  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.026223  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:58.026254  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.026507  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:58.026657  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:58.026765  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:58.026909  213635 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:45:58.112706  213635 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 17:45:58.117225  213635 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 17:45:58.117253  213635 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/addons for local assets ...
	I0414 17:45:58.117324  213635 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/files for local assets ...
	I0414 17:45:58.117416  213635 filesync.go:149] local asset: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem -> 1566332.pem in /etc/ssl/certs
	I0414 17:45:58.117503  213635 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 17:45:58.128036  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:45:58.152497  213635 start.go:296] duration metric: took 130.019138ms for postStartSetup
	I0414 17:45:58.152538  213635 fix.go:56] duration metric: took 19.889962017s for fixHost
	I0414 17:45:58.152587  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:58.155565  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.156016  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:58.156050  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.156233  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:58.156440  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:58.156667  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:58.156863  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:58.157079  213635 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:58.157365  213635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:45:58.157380  213635 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 17:45:58.262578  213635 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744652758.231554158
	
	I0414 17:45:58.262603  213635 fix.go:216] guest clock: 1744652758.231554158
	I0414 17:45:58.262612  213635 fix.go:229] Guest: 2025-04-14 17:45:58.231554158 +0000 UTC Remote: 2025-04-14 17:45:58.152542501 +0000 UTC m=+34.908827189 (delta=79.011657ms)
	I0414 17:45:58.262635  213635 fix.go:200] guest clock delta is within tolerance: 79.011657ms
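
The fix.go lines above compare the guest clock against the host and accept the drift when it is within tolerance (79ms here). A runnable Go sketch of that check; the 2s threshold is an assumed value for illustration, not necessarily what minikube uses:

package main

import (
	"fmt"
	"time"
)

// clockWithinTolerance returns the absolute guest/host clock delta and
// whether it falls inside the allowed tolerance, as in the log above.
func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(79 * time.Millisecond) // a delta similar to the one logged
	d, ok := clockWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}
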
	I0414 17:45:58.262641  213635 start.go:83] releasing machines lock for "old-k8s-version-768580", held for 20.000092548s
	I0414 17:45:58.262660  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:58.262963  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:45:58.265585  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.265964  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:58.266004  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.266157  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:58.266649  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:58.266849  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:58.266978  213635 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 17:45:58.267030  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:58.267047  213635 ssh_runner.go:195] Run: cat /version.json
	I0414 17:45:58.267073  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:58.269647  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.269715  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.270071  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:58.270098  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.270124  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:58.270157  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.270238  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:58.270344  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:58.270424  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:58.270497  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:58.270566  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:58.270678  213635 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:45:58.270730  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:58.270836  213635 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:45:54.250565  213406 pod_ready.go:103] pod "coredns-668d6bf9bc-z4n2r" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:56.250955  213406 pod_ready.go:103] pod "coredns-668d6bf9bc-z4n2r" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:58.251402  213406 pod_ready.go:103] pod "coredns-668d6bf9bc-z4n2r" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:58.343285  213635 ssh_runner.go:195] Run: systemctl --version
	I0414 17:45:58.367988  213635 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 17:45:58.519539  213635 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 17:45:58.526018  213635 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 17:45:58.526083  213635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 17:45:58.542624  213635 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
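
The find/mv invocation above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix so the runtime ignores them. A Go sketch of the same idea (the helper name is ours, and the substring match is a simplification of the find expression):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfs renames bridge/podman CNI configs so cri-o skips
// them, mirroring the `find ... -exec mv {} {}.mk_disabled` step above.
func disableBridgeCNIConfs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIConfs("/etc/cni/net.d")
	fmt.Println(disabled, err)
}
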
	I0414 17:45:58.542648  213635 start.go:495] detecting cgroup driver to use...
	I0414 17:45:58.542718  213635 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 17:45:58.558731  213635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 17:45:58.572169  213635 docker.go:217] disabling cri-docker service (if available) ...
	I0414 17:45:58.572211  213635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 17:45:58.585163  213635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 17:45:58.598940  213635 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 17:45:58.721667  213635 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 17:45:58.879281  213635 docker.go:233] disabling docker service ...
	I0414 17:45:58.879343  213635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 17:45:58.896126  213635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 17:45:58.908836  213635 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 17:45:59.033428  213635 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 17:45:59.166628  213635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 17:45:59.181684  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 17:45:59.200617  213635 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0414 17:45:59.200680  213635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:59.211541  213635 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 17:45:59.211600  213635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:59.223657  213635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:59.235487  213635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:59.248000  213635 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
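
The sed commands above patch /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image and force the cgroupfs cgroup manager. A hedged Go equivalent that rewrites (or appends) a `key = value` line; setCrioOption is an illustrative helper, not minikube's API:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption replaces any existing `key = ...` line with the new value,
// appending one if absent - the in-place equivalent of the sed edits above.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	if re.Match(data) {
		data = re.ReplaceAll(data, []byte(line))
	} else {
		data = append(data, []byte("\n"+line+"\n")...)
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	fmt.Println(setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.2"))
	fmt.Println(setCrioOption(conf, "cgroup_manager", "cgroupfs"))
}
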
	I0414 17:45:59.261365  213635 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 17:45:59.273037  213635 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 17:45:59.273132  213635 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 17:45:59.288901  213635 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 17:45:59.300042  213635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:45:59.423635  213635 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 17:45:59.529685  213635 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 17:45:59.529758  213635 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 17:45:59.534592  213635 start.go:563] Will wait 60s for crictl version
	I0414 17:45:59.534640  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:45:59.538651  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 17:45:59.578522  213635 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
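
The crictl version output above is a simple `Key:  value` listing. A small Go sketch that shells out and parses it into a map; the parsing is an assumption based only on the output shown here:

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

// crictlVersion runs crictl and splits each "Key:  value" line,
// matching the Version/RuntimeName/RuntimeVersion block above.
func crictlVersion() (map[string]string, error) {
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
	if err != nil {
		return nil, err
	}
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	return fields, sc.Err()
}

func main() {
	fields, err := crictlVersion()
	fmt.Println(fields["RuntimeName"], fields["RuntimeVersion"], err)
}
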
	I0414 17:45:59.578595  213635 ssh_runner.go:195] Run: crio --version
	I0414 17:45:59.605740  213635 ssh_runner.go:195] Run: crio --version
	I0414 17:45:59.635045  213635 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0414 17:45:56.385712  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:58.386662  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:00.388088  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:58.647473  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:01.146666  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:59.636069  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:45:59.638462  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:59.638803  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:59.638829  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:59.639064  213635 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0414 17:45:59.643370  213635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 17:45:59.657222  213635 kubeadm.go:883] updating cluster {Name:old-k8s-version-768580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 17:45:59.657362  213635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 17:45:59.657409  213635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:45:59.704172  213635 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 17:45:59.704247  213635 ssh_runner.go:195] Run: which lz4
	I0414 17:45:59.708554  213635 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 17:45:59.712850  213635 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 17:45:59.712882  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0414 17:46:01.354039  213635 crio.go:462] duration metric: took 1.645520081s to copy over tarball
	I0414 17:46:01.354112  213635 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 17:45:59.252026  213406 pod_ready.go:93] pod "coredns-668d6bf9bc-z4n2r" in "kube-system" namespace has status "Ready":"True"
	I0414 17:45:59.252050  213406 pod_ready.go:82] duration metric: took 7.006866592s for pod "coredns-668d6bf9bc-z4n2r" in "kube-system" namespace to be "Ready" ...
	I0414 17:45:59.252074  213406 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:45:59.255615  213406 pod_ready.go:93] pod "etcd-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:45:59.255638  213406 pod_ready.go:82] duration metric: took 3.555461ms for pod "etcd-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:45:59.255649  213406 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:01.263173  213406 pod_ready.go:103] pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:02.887635  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:05.387807  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:03.646378  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:05.647729  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:08.146880  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:04.261653  213635 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.907516994s)
	I0414 17:46:04.261683  213635 crio.go:469] duration metric: took 2.907610683s to extract the tarball
	I0414 17:46:04.261695  213635 ssh_runner.go:146] rm: /preloaded.tar.lz4
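
The preload path above copies preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 into the guest, extracts it with `tar -I lz4` into /var, and then removes the tarball. A Go sketch of that extract-and-clean-up sequence; error handling is simplified and it assumes tar and lz4 exist in the guest:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball with lz4 and deletes
// it afterwards, mirroring the tar/rm sequence in the log above.
func extractPreload(tarball, destDir string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extract %s: %w", tarball, err)
	}
	return os.Remove(tarball) // may itself need elevated rights; simplified here
}

func main() {
	fmt.Println(extractPreload("/preloaded.tar.lz4", "/var"))
}
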
	I0414 17:46:04.307964  213635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:46:04.345077  213635 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 17:46:04.345112  213635 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 17:46:04.345199  213635 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:46:04.345203  213635 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.345239  213635 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.345249  213635 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0414 17:46:04.345318  213635 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.345321  213635 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.345209  213635 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.345436  213635 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.347103  213635 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.347115  213635 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.347128  213635 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.347132  213635 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.347093  213635 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.347109  213635 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.347093  213635 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0414 17:46:04.347164  213635 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:46:04.489472  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.490905  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.494468  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.498887  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.499207  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.503007  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.528129  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0414 17:46:04.591926  213635 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0414 17:46:04.591983  213635 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.592033  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.628524  213635 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0414 17:46:04.628568  213635 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.628604  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.691347  213635 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0414 17:46:04.691455  213635 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.691347  213635 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0414 17:46:04.691571  213635 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.691392  213635 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0414 17:46:04.691634  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.691661  213635 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.691393  213635 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0414 17:46:04.691706  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.691731  213635 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.691759  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.691509  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.696665  213635 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0414 17:46:04.696697  213635 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0414 17:46:04.696714  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.696727  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.696730  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.707222  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.707277  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.709851  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.710042  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.834502  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 17:46:04.834653  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.834668  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.856960  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.857034  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.857094  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.857179  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.983051  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.983060  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.983060  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 17:46:05.024632  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:05.024779  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:05.031272  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:05.031399  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:05.161869  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0414 17:46:05.170557  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0414 17:46:05.170702  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 17:46:05.195041  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0414 17:46:05.195041  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0414 17:46:05.208270  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0414 17:46:05.208341  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0414 17:46:05.220290  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0414 17:46:05.331240  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:46:05.471903  213635 cache_images.go:92] duration metric: took 1.126766183s to LoadCachedImages
	W0414 17:46:05.471974  213635 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
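
The cache_images flow above first asks the runtime (via `podman image inspect`) whether each image is already present at the expected hash; anything missing is removed with crictl and reloaded from the local cache, and here the load ultimately fails because the cached coredns tarball does not exist on disk. A sketch of the "needs transfer" decision, with the expected image ID passed in as a parameter (minikube derives it from its cache metadata):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// needsTransfer reports whether an image must be (re)loaded by comparing
// the runtime's stored ID against the expected one, as in the
// `"registry.k8s.io/coredns:1.7.0" needs transfer` lines above.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present in the runtime at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	cached := "/home/jenkins/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0"
	if _, err := os.Stat(cached); err != nil {
		fmt.Println("no cached tarball, cannot load:", err) // the failure logged above
		return
	}
	ok := needsTransfer("registry.k8s.io/coredns:1.7.0",
		"bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16")
	fmt.Println("needs transfer:", ok)
}
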
	I0414 17:46:05.471985  213635 kubeadm.go:934] updating node { 192.168.72.58 8443 v1.20.0 crio true true} ...
	I0414 17:46:05.472082  213635 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-768580 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 17:46:05.472172  213635 ssh_runner.go:195] Run: crio config
	I0414 17:46:05.531642  213635 cni.go:84] Creating CNI manager for ""
	I0414 17:46:05.531667  213635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:46:05.531678  213635 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 17:46:05.531697  213635 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.58 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-768580 NodeName:old-k8s-version-768580 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0414 17:46:05.531815  213635 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-768580"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
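
The kubeadm config above is generated from the cluster parameters logged earlier (advertise address, pod and service CIDRs, Kubernetes version). As a rough illustration, a subset of the ClusterConfiguration can be rendered with text/template; this is a sketch with an assumed field set, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// clusterTmpl renders a small, illustrative slice of the kubeadm
// ClusterConfiguration shown above from per-cluster parameters.
const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: {{.Endpoint}}:8443
kubernetesVersion: {{.Version}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterTmpl))
	_ = t.Execute(os.Stdout, map[string]string{
		"Endpoint":      "control-plane.minikube.internal",
		"Version":       "v1.20.0",
		"PodSubnet":     "10.244.0.0/16",
		"ServiceSubnet": "10.96.0.0/12",
	})
}
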
	
	I0414 17:46:05.531897  213635 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0414 17:46:05.542769  213635 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 17:46:05.542861  213635 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 17:46:05.552930  213635 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0414 17:46:05.570087  213635 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 17:46:05.588483  213635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0414 17:46:05.606443  213635 ssh_runner.go:195] Run: grep 192.168.72.58	control-plane.minikube.internal$ /etc/hosts
	I0414 17:46:05.610756  213635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
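
The bash pipeline above is an idempotent /etc/hosts update: strip any existing line ending in the tabbed hostname, then append a fresh IP<tab>name entry. The same effect in Go, as a sketch (comment handling and atomic-write details omitted):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any stale line for name and appends "ip<TAB>name",
// matching the grep -v / echo pipeline in the log above.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// keep every non-empty line that is not the old entry for name
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.72.58", "control-plane.minikube.internal"))
}
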
	I0414 17:46:05.622873  213635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:46:05.770402  213635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:46:05.789353  213635 certs.go:68] Setting up /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580 for IP: 192.168.72.58
	I0414 17:46:05.789374  213635 certs.go:194] generating shared ca certs ...
	I0414 17:46:05.789395  213635 certs.go:226] acquiring lock for ca certs: {Name:mk65518f71a0fe967168d84423f624d889cf0622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:46:05.789542  213635 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key
	I0414 17:46:05.789598  213635 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key
	I0414 17:46:05.789613  213635 certs.go:256] generating profile certs ...
	I0414 17:46:05.789717  213635 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/client.key
	I0414 17:46:05.789816  213635 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.key.0f5f550a
	I0414 17:46:05.789911  213635 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/proxy-client.key
	I0414 17:46:05.790030  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem (1338 bytes)
	W0414 17:46:05.790067  213635 certs.go:480] ignoring /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633_empty.pem, impossibly tiny 0 bytes
	I0414 17:46:05.790077  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 17:46:05.790130  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem (1082 bytes)
	I0414 17:46:05.790163  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem (1123 bytes)
	I0414 17:46:05.790195  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem (1675 bytes)
	I0414 17:46:05.790251  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:46:05.790829  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 17:46:05.852348  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 17:46:05.879909  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 17:46:05.924274  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 17:46:05.968318  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 17:46:06.004046  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 17:46:06.039672  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 17:46:06.068041  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 17:46:06.093159  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem --> /usr/share/ca-certificates/156633.pem (1338 bytes)
	I0414 17:46:06.118949  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /usr/share/ca-certificates/1566332.pem (1708 bytes)
	I0414 17:46:06.144480  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 17:46:06.171159  213635 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 17:46:06.189499  213635 ssh_runner.go:195] Run: openssl version
	I0414 17:46:06.196060  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156633.pem && ln -fs /usr/share/ca-certificates/156633.pem /etc/ssl/certs/156633.pem"
	I0414 17:46:06.206864  213635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156633.pem
	I0414 17:46:06.211352  213635 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 16:39 /usr/share/ca-certificates/156633.pem
	I0414 17:46:06.211407  213635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156633.pem
	I0414 17:46:06.217759  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/156633.pem /etc/ssl/certs/51391683.0"
	I0414 17:46:06.228546  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1566332.pem && ln -fs /usr/share/ca-certificates/1566332.pem /etc/ssl/certs/1566332.pem"
	I0414 17:46:06.239146  213635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1566332.pem
	I0414 17:46:06.243457  213635 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 16:39 /usr/share/ca-certificates/1566332.pem
	I0414 17:46:06.243511  213635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1566332.pem
	I0414 17:46:06.249141  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1566332.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 17:46:06.259582  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 17:46:06.269988  213635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:46:06.275271  213635 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 16:31 /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:46:06.275324  213635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:46:06.282428  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
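
Each `test -L ... || ln -fs ...` pair above links /etc/ssl/certs/<subject-hash>.0 to a certificate, the layout OpenSSL uses to look up CAs by hashed name; the hash itself comes from `openssl x509 -hash -noout`. A Go sketch of creating one such link (the helper name is ours):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash for certPath and links
// /etc/ssl/certs/<hash>.0 to it, as the openssl/ln steps above do.
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"))
}
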
	I0414 17:46:06.293404  213635 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 17:46:06.298115  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 17:46:06.304513  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 17:46:06.310675  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 17:46:06.317218  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 17:46:06.324114  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 17:46:06.331759  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0414 17:46:06.337898  213635 kubeadm.go:392] StartCluster: {Name:old-k8s-version-768580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:46:06.337991  213635 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 17:46:06.338037  213635 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:46:06.381282  213635 cri.go:89] found id: ""
	I0414 17:46:06.381351  213635 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 17:46:06.392326  213635 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 17:46:06.392345  213635 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 17:46:06.392385  213635 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 17:46:06.402275  213635 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 17:46:06.403224  213635 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-768580" does not appear in /home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:46:06.403594  213635 kubeconfig.go:62] /home/jenkins/minikube-integration/20349-149500/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-768580" cluster setting kubeconfig missing "old-k8s-version-768580" context setting]
	I0414 17:46:06.404086  213635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:46:06.460048  213635 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 17:46:06.470500  213635 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.58
	I0414 17:46:06.470535  213635 kubeadm.go:1160] stopping kube-system containers ...
	I0414 17:46:06.470546  213635 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0414 17:46:06.470624  213635 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:46:06.509152  213635 cri.go:89] found id: ""
	I0414 17:46:06.509210  213635 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0414 17:46:06.526163  213635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:46:06.535901  213635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:46:06.535928  213635 kubeadm.go:157] found existing configuration files:
	
	I0414 17:46:06.535978  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:46:06.545480  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:46:06.545535  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:46:06.554610  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:46:06.563294  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:46:06.563347  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:46:06.572284  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:46:06.581431  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:46:06.581475  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:46:06.591211  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:46:06.600340  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:46:06.600408  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 17:46:06.609494  213635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:46:06.618800  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:46:06.747191  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:46:07.478890  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:46:07.697670  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:46:07.793179  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:46:07.893891  213635 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:46:07.893971  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:03.762310  213406 pod_ready.go:103] pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:04.762763  213406 pod_ready.go:93] pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:46:04.762794  213406 pod_ready.go:82] duration metric: took 5.507135949s for pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.762808  213406 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.767311  213406 pod_ready.go:93] pod "kube-controller-manager-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:46:04.767329  213406 pod_ready.go:82] duration metric: took 4.514084ms for pod "kube-controller-manager-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.767337  213406 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6dft2" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.771924  213406 pod_ready.go:93] pod "kube-proxy-6dft2" in "kube-system" namespace has status "Ready":"True"
	I0414 17:46:04.771944  213406 pod_ready.go:82] duration metric: took 4.599852ms for pod "kube-proxy-6dft2" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.771954  213406 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.776235  213406 pod_ready.go:93] pod "kube-scheduler-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:46:04.776251  213406 pod_ready.go:82] duration metric: took 4.290311ms for pod "kube-scheduler-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.776264  213406 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:06.782241  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:07.388743  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:09.886293  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:10.645757  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:12.646190  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:08.394410  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:08.895002  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:09.395022  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:09.895018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:10.394996  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:10.894824  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:11.394638  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:11.894428  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:12.394452  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:12.894017  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
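
The repeated pgrep runs above are a poll loop: the start path re-checks for a kube-apiserver process until one appears or a deadline passes. A sketch of that loop; the 500ms interval is inferred from the log timestamps, and the deadline below is an assumed value:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until kube-apiserver shows up or the
// timeout elapses, matching the repeated pgrep runs in the log above.
func waitForAPIServer(timeout time.Duration) (int, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			var pid int
			fmt.Sscanf(string(out), "%d", &pid)
			return pid, nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return 0, fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	pid, err := waitForAPIServer(2 * time.Minute)
	fmt.Println(pid, err)
}
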
	I0414 17:46:09.281824  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:11.282179  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:11.886469  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:13.886515  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:15.146498  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:17.147156  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:13.394405  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:13.894519  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:14.394847  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:14.894997  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:15.394630  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:15.895007  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:16.394831  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:16.894632  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:17.395016  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:17.894993  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:13.783938  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:16.282525  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:16.387995  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:18.887504  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:19.645731  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:21.645945  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:18.394976  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:18.895068  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:19.394434  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:19.894886  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:20.395037  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:20.895061  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:21.394429  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:21.894500  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:22.394822  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:22.895080  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:18.782119  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:20.785464  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:23.281701  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:21.387824  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:23.886390  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:24.145922  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:26.645858  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:23.394953  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:23.894339  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:24.395018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:24.895037  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:25.394854  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:25.894984  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:26.395005  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:26.895007  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:27.395035  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:27.895034  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:25.282520  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:27.780903  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:26.386775  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:28.886919  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:28.646216  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:30.646635  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:33.146515  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:28.394580  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:28.895018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:29.394479  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:29.894485  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:30.394483  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:30.894471  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:31.395020  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:31.895014  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:32.395034  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:32.895028  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:29.782338  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:32.280971  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:31.389561  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:33.885891  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:35.646041  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:38.146195  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:33.394018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:33.894501  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:34.394226  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:34.894064  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:35.394952  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:35.895016  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:36.394607  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:36.895006  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:37.394673  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:37.894995  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:34.282968  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:36.781804  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:35.886870  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:38.385985  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:40.386210  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:40.646578  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:43.146373  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:38.394272  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:38.894875  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:39.394148  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:39.895036  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:40.394685  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:40.895010  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:41.394981  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:41.894634  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:42.394270  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:42.895029  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:38.783097  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:41.281604  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:43.281689  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:42.387307  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:44.885815  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:45.646331  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:48.146832  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:43.394362  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:43.894756  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:44.395057  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:44.895022  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:45.394470  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:45.894701  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:46.395033  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:46.895033  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:47.394321  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:47.895018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:45.781213  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:47.782055  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:46.886132  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:48.887731  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:50.646089  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:52.646393  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:48.394554  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:48.894703  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:49.394432  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:49.894498  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:50.395063  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:50.894449  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:51.395000  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:51.895026  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:52.394891  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:52.894471  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:49.782883  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:52.282500  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:51.386370  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:53.387056  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:55.387096  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:55.145864  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:57.145973  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:53.394778  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:53.894664  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:54.394089  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:54.894622  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:55.394495  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:55.894999  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:56.395001  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:56.894095  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:57.394283  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:57.894977  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:54.282957  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:56.781374  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:57.887077  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:00.386841  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:59.146801  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:01.645801  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:58.394681  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:58.895019  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:59.394738  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:59.894984  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:00.394802  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:00.894854  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:01.395049  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:01.895019  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:02.394977  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:02.894501  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:58.782051  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:00.782255  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:02.782525  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:02.886126  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:04.886471  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:03.646142  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:06.146967  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:03.394365  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:03.895039  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:04.395027  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:04.894987  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:05.394716  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:05.894080  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:06.394955  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:06.894670  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:07.394902  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:07.894929  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:07.895008  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:07.936773  213635 cri.go:89] found id: ""
	I0414 17:47:07.936809  213635 logs.go:282] 0 containers: []
	W0414 17:47:07.936822  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:07.936830  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:07.936908  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:07.971073  213635 cri.go:89] found id: ""
	I0414 17:47:07.971104  213635 logs.go:282] 0 containers: []
	W0414 17:47:07.971113  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:07.971118  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:07.971171  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:08.010389  213635 cri.go:89] found id: ""
	I0414 17:47:08.010414  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.010422  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:08.010427  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:08.010482  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:08.044286  213635 cri.go:89] found id: ""
	I0414 17:47:08.044322  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.044334  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:08.044344  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:08.044413  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:08.079985  213635 cri.go:89] found id: ""
	I0414 17:47:08.080008  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.080016  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:08.080021  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:08.080071  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:08.119431  213635 cri.go:89] found id: ""
	I0414 17:47:08.119456  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.119468  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:08.119474  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:08.119529  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:08.152203  213635 cri.go:89] found id: ""
	I0414 17:47:08.152227  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.152234  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:08.152239  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:08.152287  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:08.187035  213635 cri.go:89] found id: ""
	I0414 17:47:08.187064  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.187075  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:08.187092  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:08.187106  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0414 17:47:05.283544  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:07.781984  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:06.887145  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:09.386391  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:08.645957  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:10.646258  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:13.147462  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	W0414 17:47:08.312274  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:08.312301  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:08.312315  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:08.382714  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:08.382745  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:08.421561  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:08.421588  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:08.476855  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:08.476891  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:10.991104  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:11.004501  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:11.004575  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:11.039060  213635 cri.go:89] found id: ""
	I0414 17:47:11.039086  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.039094  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:11.039099  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:11.039145  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:11.073857  213635 cri.go:89] found id: ""
	I0414 17:47:11.073883  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.073890  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:11.073896  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:11.073942  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:11.106411  213635 cri.go:89] found id: ""
	I0414 17:47:11.106436  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.106493  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:11.106505  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:11.106550  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:11.145377  213635 cri.go:89] found id: ""
	I0414 17:47:11.145406  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.145416  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:11.145423  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:11.145481  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:11.178621  213635 cri.go:89] found id: ""
	I0414 17:47:11.178650  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.178661  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:11.178668  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:11.178731  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:11.212798  213635 cri.go:89] found id: ""
	I0414 17:47:11.212832  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.212840  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:11.212846  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:11.212902  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:11.258553  213635 cri.go:89] found id: ""
	I0414 17:47:11.258576  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.258584  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:11.258589  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:11.258637  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:11.318616  213635 cri.go:89] found id: ""
	I0414 17:47:11.318658  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.318669  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:11.318680  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:11.318695  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:11.381468  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:11.381500  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:11.395975  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:11.395999  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:11.468932  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:11.468954  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:11.468971  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:11.547431  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:11.547464  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:10.281538  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:12.284013  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:11.386803  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:13.387771  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:15.645939  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:17.647578  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:14.089096  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:14.105644  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:14.105710  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:14.139763  213635 cri.go:89] found id: ""
	I0414 17:47:14.139791  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.139798  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:14.139804  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:14.139866  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:14.174571  213635 cri.go:89] found id: ""
	I0414 17:47:14.174594  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.174600  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:14.174605  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:14.174659  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:14.208140  213635 cri.go:89] found id: ""
	I0414 17:47:14.208164  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.208171  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:14.208177  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:14.208233  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:14.240906  213635 cri.go:89] found id: ""
	I0414 17:47:14.240940  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.240952  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:14.240959  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:14.241023  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:14.273549  213635 cri.go:89] found id: ""
	I0414 17:47:14.273581  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.273593  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:14.273599  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:14.273652  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:14.308758  213635 cri.go:89] found id: ""
	I0414 17:47:14.308791  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.308798  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:14.308805  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:14.308868  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:14.343464  213635 cri.go:89] found id: ""
	I0414 17:47:14.343492  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.343503  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:14.343510  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:14.343571  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:14.377456  213635 cri.go:89] found id: ""
	I0414 17:47:14.377483  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.377493  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:14.377503  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:14.377517  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:14.428031  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:14.428059  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:14.441682  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:14.441706  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:14.511433  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:14.511456  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:14.511470  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:14.591334  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:14.591373  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:17.131067  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:17.150199  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:17.150257  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:17.195868  213635 cri.go:89] found id: ""
	I0414 17:47:17.195895  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.195902  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:17.195909  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:17.195968  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:17.248530  213635 cri.go:89] found id: ""
	I0414 17:47:17.248562  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.248573  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:17.248600  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:17.248664  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:17.302561  213635 cri.go:89] found id: ""
	I0414 17:47:17.302592  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.302603  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:17.302611  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:17.302676  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:17.337154  213635 cri.go:89] found id: ""
	I0414 17:47:17.337185  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.337196  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:17.337204  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:17.337262  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:17.372117  213635 cri.go:89] found id: ""
	I0414 17:47:17.372142  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.372149  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:17.372154  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:17.372209  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:17.409162  213635 cri.go:89] found id: ""
	I0414 17:47:17.409190  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.409199  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:17.409204  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:17.409253  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:17.444609  213635 cri.go:89] found id: ""
	I0414 17:47:17.444636  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.444652  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:17.444660  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:17.444721  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:17.484188  213635 cri.go:89] found id: ""
	I0414 17:47:17.484216  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.484226  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:17.484238  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:17.484252  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:17.523203  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:17.523249  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:17.573785  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:17.573818  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:17.586989  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:17.587014  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:17.659369  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:17.659392  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:17.659408  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:14.781454  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:16.782152  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:15.888032  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:18.387319  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:20.147048  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:22.646239  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:20.241973  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:20.255211  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:20.255288  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:20.292821  213635 cri.go:89] found id: ""
	I0414 17:47:20.292854  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.292866  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:20.292873  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:20.292933  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:20.331101  213635 cri.go:89] found id: ""
	I0414 17:47:20.331150  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.331162  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:20.331169  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:20.331247  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:20.369990  213635 cri.go:89] found id: ""
	I0414 17:47:20.370015  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.370022  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:20.370027  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:20.370096  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:20.406805  213635 cri.go:89] found id: ""
	I0414 17:47:20.406836  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.406846  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:20.406852  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:20.406913  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:20.442314  213635 cri.go:89] found id: ""
	I0414 17:47:20.442340  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.442348  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:20.442353  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:20.442413  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:20.476588  213635 cri.go:89] found id: ""
	I0414 17:47:20.476617  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.476627  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:20.476634  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:20.476695  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:20.510731  213635 cri.go:89] found id: ""
	I0414 17:47:20.510782  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.510821  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:20.510833  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:20.510906  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:20.545219  213635 cri.go:89] found id: ""
	I0414 17:47:20.545244  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.545255  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:20.545277  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:20.545292  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:20.583147  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:20.583180  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:20.636347  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:20.636382  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:20.650452  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:20.650477  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:20.722784  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:20.722811  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:20.722828  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:19.282759  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:21.782197  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:20.886279  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:22.886745  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:24.886852  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:25.145867  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:27.146656  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:23.298966  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:23.312159  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:23.312251  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:23.353883  213635 cri.go:89] found id: ""
	I0414 17:47:23.353907  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.353915  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:23.353921  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:23.354005  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:23.391644  213635 cri.go:89] found id: ""
	I0414 17:47:23.391671  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.391680  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:23.391688  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:23.391732  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:23.427612  213635 cri.go:89] found id: ""
	I0414 17:47:23.427644  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.427652  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:23.427658  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:23.427719  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:23.463296  213635 cri.go:89] found id: ""
	I0414 17:47:23.463324  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.463335  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:23.463344  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:23.463408  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:23.497377  213635 cri.go:89] found id: ""
	I0414 17:47:23.497407  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.497418  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:23.497426  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:23.497487  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:23.534162  213635 cri.go:89] found id: ""
	I0414 17:47:23.534209  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.534222  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:23.534229  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:23.534299  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:23.574494  213635 cri.go:89] found id: ""
	I0414 17:47:23.574524  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.574535  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:23.574542  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:23.574611  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:23.612210  213635 cri.go:89] found id: ""
	I0414 17:47:23.612265  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.612279  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:23.612289  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:23.612304  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:23.689765  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:23.689802  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:23.725675  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:23.725709  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:23.778002  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:23.778031  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:23.793019  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:23.793052  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:23.866451  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:26.367039  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:26.381917  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:26.381987  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:26.416638  213635 cri.go:89] found id: ""
	I0414 17:47:26.416661  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.416668  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:26.416674  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:26.416721  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:26.458324  213635 cri.go:89] found id: ""
	I0414 17:47:26.458349  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.458360  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:26.458367  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:26.458423  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:26.493044  213635 cri.go:89] found id: ""
	I0414 17:47:26.493096  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.493109  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:26.493116  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:26.493181  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:26.527654  213635 cri.go:89] found id: ""
	I0414 17:47:26.527690  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.527702  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:26.527709  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:26.527769  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:26.565607  213635 cri.go:89] found id: ""
	I0414 17:47:26.565633  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.565639  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:26.565645  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:26.565692  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:26.598157  213635 cri.go:89] found id: ""
	I0414 17:47:26.598186  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.598196  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:26.598204  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:26.598264  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:26.631534  213635 cri.go:89] found id: ""
	I0414 17:47:26.631572  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.631581  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:26.631586  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:26.631652  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:26.669109  213635 cri.go:89] found id: ""
	I0414 17:47:26.669134  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.669145  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:26.669155  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:26.669169  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:26.722048  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:26.722075  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:26.735141  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:26.735160  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:26.808950  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:26.808979  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:26.808996  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:26.896662  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:26.896693  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
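[editor's note] The cri.go lines above shell out to crictl over SSH, once per control-plane component, and record any container IDs returned; here every query comes back empty (`found id: ""`), which is why the gatherer falls back to journal and dmesg logs. A minimal local (non-SSH) sketch of that same query, assuming sudo and crictl are available on PATH; listContainerIDs is a hypothetical helper name, not minikube's actual code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors the crictl query in the logs above: list all
	// containers (running or exited) whose name matches the component and
	// return their IDs, one per output line.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("%s: query failed: %v\n", c, err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}

An empty ID list for kube-apiserver, as seen throughout this section, means the control plane never came up under CRI-O, which also explains the describe-nodes failures below.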
	I0414 17:47:23.785953  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:26.284260  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:27.386201  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:29.386726  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:29.146828  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:31.646619  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
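[editor's note] The interleaved pod_ready.go lines come from other test profiles (PIDs 213406, 212269, 212456) polling metrics-server pods until their Ready condition flips to True. A rough sketch of such a readiness check using client-go; this is an assumption for illustration, not minikube's pod_ready.go implementation, and podReady is a hypothetical helper:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the named pod's Ready condition is True,
	// i.e. the opposite of the `"Ready":"False"` lines logged above.
	func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ready, err := podReady(cs, "kube-system", "metrics-server-f79f97bbb-9vnsg")
		fmt.Println(ready, err)
	}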
	I0414 17:47:29.440079  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:29.454761  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:29.454837  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:29.488451  213635 cri.go:89] found id: ""
	I0414 17:47:29.488480  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.488491  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:29.488499  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:29.488548  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:29.520861  213635 cri.go:89] found id: ""
	I0414 17:47:29.520891  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.520902  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:29.520908  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:29.520963  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:29.557913  213635 cri.go:89] found id: ""
	I0414 17:47:29.557939  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.557949  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:29.557956  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:29.558013  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:29.596839  213635 cri.go:89] found id: ""
	I0414 17:47:29.596878  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.596889  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:29.596896  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:29.596959  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:29.631746  213635 cri.go:89] found id: ""
	I0414 17:47:29.631779  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.631789  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:29.631797  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:29.631864  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:29.667006  213635 cri.go:89] found id: ""
	I0414 17:47:29.667034  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.667048  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:29.667055  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:29.667111  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:29.700458  213635 cri.go:89] found id: ""
	I0414 17:47:29.700490  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.700500  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:29.700507  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:29.700569  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:29.736776  213635 cri.go:89] found id: ""
	I0414 17:47:29.736804  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.736814  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:29.736825  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:29.736840  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:29.776831  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:29.776871  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:29.830601  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:29.830632  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:29.844366  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:29.844396  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:29.920571  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:29.920595  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:29.920611  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:32.502415  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:32.516740  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:32.516806  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:32.551360  213635 cri.go:89] found id: ""
	I0414 17:47:32.551380  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.551387  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:32.551393  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:32.551440  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:32.588757  213635 cri.go:89] found id: ""
	I0414 17:47:32.588785  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.588795  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:32.588802  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:32.588869  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:32.622369  213635 cri.go:89] found id: ""
	I0414 17:47:32.622394  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.622405  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:32.622413  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:32.622473  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:32.658310  213635 cri.go:89] found id: ""
	I0414 17:47:32.658334  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.658343  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:32.658350  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:32.658408  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:32.692724  213635 cri.go:89] found id: ""
	I0414 17:47:32.692756  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.692768  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:32.692776  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:32.692836  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:32.729086  213635 cri.go:89] found id: ""
	I0414 17:47:32.729113  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.729121  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:32.729127  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:32.729182  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:32.761853  213635 cri.go:89] found id: ""
	I0414 17:47:32.761878  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.761886  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:32.761891  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:32.761937  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:32.794906  213635 cri.go:89] found id: ""
	I0414 17:47:32.794931  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.794938  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:32.794945  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:32.794956  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:32.876985  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:32.877027  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:32.915184  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:32.915210  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:32.965418  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:32.965449  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:32.978245  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:32.978270  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:33.046592  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
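[editor's note] Every "describe nodes" attempt in this stretch fails identically: kubectl cannot reach the apiserver on localhost:8443, so the command exits 1 with empty stdout and the "connection refused" message on stderr. A small sketch of how that invocation surfaces the split stdout/stderr quoted in the report, reusing the exact command and kubeconfig path from the logs:

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation the log gatherer runs; with the apiserver down it
		// exits non-zero, stdout stays empty, and stderr carries the
		// "connection refused" line seen above.
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.20.0/kubectl",
			"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
		var stdout, stderr bytes.Buffer
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr
		err := cmd.Run()
		fmt.Printf("err: %v\nstdout: %q\nstderr: %q\n", err, stdout.String(), stderr.String())
	}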
	I0414 17:47:28.782031  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:31.281960  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:33.283783  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:31.885919  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:34.385966  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:34.146066  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:36.645902  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:35.547721  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:35.562729  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:35.562794  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:35.600323  213635 cri.go:89] found id: ""
	I0414 17:47:35.600353  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.600365  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:35.600374  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:35.600426  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:35.639091  213635 cri.go:89] found id: ""
	I0414 17:47:35.639116  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.639124  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:35.639130  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:35.639185  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:35.674709  213635 cri.go:89] found id: ""
	I0414 17:47:35.674743  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.674755  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:35.674763  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:35.674825  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:35.712316  213635 cri.go:89] found id: ""
	I0414 17:47:35.712340  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.712347  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:35.712353  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:35.712399  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:35.746497  213635 cri.go:89] found id: ""
	I0414 17:47:35.746525  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.746535  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:35.746542  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:35.746611  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:35.787414  213635 cri.go:89] found id: ""
	I0414 17:47:35.787436  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.787445  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:35.787460  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:35.787514  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:35.818830  213635 cri.go:89] found id: ""
	I0414 17:47:35.818857  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.818867  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:35.818874  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:35.818938  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:35.854020  213635 cri.go:89] found id: ""
	I0414 17:47:35.854048  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.854059  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:35.854082  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:35.854095  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:35.907502  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:35.907530  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:35.922223  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:35.922248  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:35.992058  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:35.992085  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:35.992101  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:36.070377  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:36.070413  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:35.782944  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:38.283160  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:36.388560  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:38.886997  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:38.647280  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:41.146882  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:38.612483  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:38.625570  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:38.625639  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:38.664060  213635 cri.go:89] found id: ""
	I0414 17:47:38.664084  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.664104  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:38.664112  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:38.664168  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:38.698505  213635 cri.go:89] found id: ""
	I0414 17:47:38.698535  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.698546  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:38.698553  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:38.698614  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:38.735113  213635 cri.go:89] found id: ""
	I0414 17:47:38.735142  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.735153  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:38.735161  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:38.735229  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:38.773173  213635 cri.go:89] found id: ""
	I0414 17:47:38.773203  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.773211  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:38.773216  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:38.773270  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:38.807136  213635 cri.go:89] found id: ""
	I0414 17:47:38.807167  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.807178  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:38.807186  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:38.807244  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:38.844350  213635 cri.go:89] found id: ""
	I0414 17:47:38.844375  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.844384  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:38.844392  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:38.844445  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:38.879565  213635 cri.go:89] found id: ""
	I0414 17:47:38.879587  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.879594  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:38.879599  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:38.879658  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:38.916412  213635 cri.go:89] found id: ""
	I0414 17:47:38.916449  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.916457  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:38.916465  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:38.916475  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:38.953944  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:38.953972  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:39.004989  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:39.005019  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:39.018618  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:39.018640  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:39.091095  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:39.091122  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:39.091136  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:41.675012  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:41.689023  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:41.689085  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:41.722675  213635 cri.go:89] found id: ""
	I0414 17:47:41.722698  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.722707  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:41.722715  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:41.722774  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:41.757787  213635 cri.go:89] found id: ""
	I0414 17:47:41.757808  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.757815  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:41.757822  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:41.757895  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:41.792938  213635 cri.go:89] found id: ""
	I0414 17:47:41.792970  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.792981  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:41.792990  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:41.793060  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:41.826121  213635 cri.go:89] found id: ""
	I0414 17:47:41.826145  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.826153  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:41.826158  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:41.826206  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:41.862687  213635 cri.go:89] found id: ""
	I0414 17:47:41.862717  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.862728  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:41.862735  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:41.862810  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:41.901905  213635 cri.go:89] found id: ""
	I0414 17:47:41.901935  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.901945  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:41.901953  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:41.902010  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:41.936560  213635 cri.go:89] found id: ""
	I0414 17:47:41.936591  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.936602  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:41.936609  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:41.936673  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:41.968609  213635 cri.go:89] found id: ""
	I0414 17:47:41.968640  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.968651  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:41.968663  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:41.968677  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:42.037691  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:42.037725  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:42.037742  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:42.123173  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:42.123222  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:42.164982  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:42.165018  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:42.217567  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:42.217601  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
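[editor's note] The timestamps show the gatherer re-running `pgrep -xnf kube-apiserver.*minikube.*` roughly every three seconds and repeating the whole container sweep when nothing matches. A generic sketch of that retry cadence with a plain ticker; waitForAPIServer is a hypothetical name and the real control flow in minikube differs:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls for a running kube-apiserver process, the same
	// check the pgrep lines above perform, until one appears or the
	// context expires.
	func waitForAPIServer(ctx context.Context, interval time.Duration) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil // pgrep exits 0 once a matching process exists
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		fmt.Println(waitForAPIServer(ctx, 3*time.Second))
	}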
	I0414 17:47:40.283210  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:42.286058  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:40.887506  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:43.387362  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:43.646155  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:46.145968  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:48.147182  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:44.733645  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:44.748083  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:44.748144  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:44.782103  213635 cri.go:89] found id: ""
	I0414 17:47:44.782131  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.782141  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:44.782148  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:44.782200  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:44.825594  213635 cri.go:89] found id: ""
	I0414 17:47:44.825640  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.825652  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:44.825659  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:44.825719  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:44.858967  213635 cri.go:89] found id: ""
	I0414 17:47:44.859000  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.859017  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:44.859024  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:44.859088  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:44.892965  213635 cri.go:89] found id: ""
	I0414 17:47:44.892990  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.892999  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:44.893007  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:44.893073  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:44.926983  213635 cri.go:89] found id: ""
	I0414 17:47:44.927007  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.927014  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:44.927019  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:44.927066  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:44.961406  213635 cri.go:89] found id: ""
	I0414 17:47:44.961459  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.961471  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:44.961478  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:44.961540  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:44.996262  213635 cri.go:89] found id: ""
	I0414 17:47:44.996287  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.996296  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:44.996304  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:44.996368  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:45.029476  213635 cri.go:89] found id: ""
	I0414 17:47:45.029507  213635 logs.go:282] 0 containers: []
	W0414 17:47:45.029518  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:45.029529  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:45.029543  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:45.100081  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:45.100110  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:45.100122  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:45.179286  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:45.179319  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:45.220129  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:45.220166  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:45.275257  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:45.275292  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
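[editor's note] The recurring dmesg step keeps only warning-and-worse kernel messages (per util-linux dmesg: -H human-readable, -P no pager, -L=never no color, --level severity filter) and caps output at 400 lines via the in-shell tail. A sketch that reproduces the same filter locally, copying the command verbatim from the logs:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same dmesg filter the gatherer uses; the `| tail -n 400`
		// truncation runs inside the shell, as in the original command.
		out, err := exec.Command("/bin/bash", "-c",
			`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`).CombinedOutput()
		if err != nil {
			fmt.Println("dmesg failed:", err)
		}
		fmt.Print(string(out))
	}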
	I0414 17:47:47.792170  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:47.805709  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:47.805769  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:47.842023  213635 cri.go:89] found id: ""
	I0414 17:47:47.842050  213635 logs.go:282] 0 containers: []
	W0414 17:47:47.842058  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:47.842063  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:47.842118  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:47.884228  213635 cri.go:89] found id: ""
	I0414 17:47:47.884260  213635 logs.go:282] 0 containers: []
	W0414 17:47:47.884271  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:47.884278  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:47.884338  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:47.924093  213635 cri.go:89] found id: ""
	I0414 17:47:47.924121  213635 logs.go:282] 0 containers: []
	W0414 17:47:47.924130  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:47.924137  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:47.924193  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:47.965378  213635 cri.go:89] found id: ""
	I0414 17:47:47.965406  213635 logs.go:282] 0 containers: []
	W0414 17:47:47.965416  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:47.965423  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:47.965538  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:48.003136  213635 cri.go:89] found id: ""
	I0414 17:47:48.003165  213635 logs.go:282] 0 containers: []
	W0414 17:47:48.003178  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:48.003187  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:48.003253  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:48.042729  213635 cri.go:89] found id: ""
	I0414 17:47:48.042758  213635 logs.go:282] 0 containers: []
	W0414 17:47:48.042768  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:48.042774  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:48.042837  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:48.077654  213635 cri.go:89] found id: ""
	I0414 17:47:48.077682  213635 logs.go:282] 0 containers: []
	W0414 17:47:48.077692  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:48.077699  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:48.077749  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:48.109967  213635 cri.go:89] found id: ""
	I0414 17:47:48.109991  213635 logs.go:282] 0 containers: []
	W0414 17:47:48.109998  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:48.110006  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:48.110017  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:48.125245  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:48.125277  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:48.194705  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:48.194725  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:48.194738  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:44.783825  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:47.283708  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:45.886120  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:47.886616  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:50.387382  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:50.646377  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:53.145406  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:48.287160  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:48.287196  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:48.335515  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:48.335547  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
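[editor's note] Each "Gathering logs for kubelet / CRI-O" step tails the matching systemd unit with `journalctl -u <unit> -n 400`, as seen throughout this section. A local sketch of that collection step; unitLogs is a hypothetical helper name:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// unitLogs returns the last n lines of a systemd unit's journal,
	// matching the `journalctl -u <unit> -n 400` commands in the logs.
	func unitLogs(unit string, n int) (string, error) {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, u := range []string{"kubelet", "crio"} {
			logs, err := unitLogs(u, 400)
			if err != nil {
				fmt.Printf("%s: %v\n", u, err)
				continue
			}
			fmt.Printf("=== %s (%d bytes) ===\n%s", u, len(logs), logs)
		}
	}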
	I0414 17:47:50.892108  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:50.905172  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:50.905234  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:50.940079  213635 cri.go:89] found id: ""
	I0414 17:47:50.940104  213635 logs.go:282] 0 containers: []
	W0414 17:47:50.940111  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:50.940116  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:50.940176  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:50.973887  213635 cri.go:89] found id: ""
	I0414 17:47:50.973912  213635 logs.go:282] 0 containers: []
	W0414 17:47:50.973919  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:50.973926  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:50.973982  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:51.012547  213635 cri.go:89] found id: ""
	I0414 17:47:51.012568  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.012577  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:51.012584  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:51.012640  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:51.053157  213635 cri.go:89] found id: ""
	I0414 17:47:51.053180  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.053188  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:51.053196  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:51.053249  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:51.110289  213635 cri.go:89] found id: ""
	I0414 17:47:51.110319  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.110330  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:51.110337  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:51.110393  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:51.144361  213635 cri.go:89] found id: ""
	I0414 17:47:51.144383  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.144394  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:51.144402  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:51.144530  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:51.177527  213635 cri.go:89] found id: ""
	I0414 17:47:51.177563  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.177571  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:51.177576  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:51.177636  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:51.210869  213635 cri.go:89] found id: ""
	I0414 17:47:51.210891  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.210899  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:51.210907  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:51.210918  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:51.247291  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:51.247317  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:51.299677  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:51.299706  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:51.313384  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:51.313409  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:51.388212  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:51.388239  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:51.388254  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:49.781341  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:51.782513  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:52.886676  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:55.386338  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:55.145724  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:57.146515  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:53.976114  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:53.989051  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:53.989115  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:54.023756  213635 cri.go:89] found id: ""
	I0414 17:47:54.023788  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.023799  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:54.023805  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:54.023869  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:54.061807  213635 cri.go:89] found id: ""
	I0414 17:47:54.061853  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.061865  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:54.061872  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:54.061930  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:54.095835  213635 cri.go:89] found id: ""
	I0414 17:47:54.095878  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.095890  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:54.095897  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:54.096006  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:54.131513  213635 cri.go:89] found id: ""
	I0414 17:47:54.131535  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.131543  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:54.131548  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:54.131594  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:54.171002  213635 cri.go:89] found id: ""
	I0414 17:47:54.171024  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.171031  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:54.171037  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:54.171095  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:54.206779  213635 cri.go:89] found id: ""
	I0414 17:47:54.206801  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.206808  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:54.206818  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:54.206876  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:54.252485  213635 cri.go:89] found id: ""
	I0414 17:47:54.252533  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.252547  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:54.252555  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:54.252628  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:54.290628  213635 cri.go:89] found id: ""
	I0414 17:47:54.290656  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.290667  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:54.290676  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:54.290689  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:54.364000  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:54.364020  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:54.364032  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:54.446117  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:54.446152  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:54.488749  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:54.488775  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:54.540890  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:54.540922  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:57.055546  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:57.069362  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:57.069420  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:57.112914  213635 cri.go:89] found id: ""
	I0414 17:47:57.112942  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.112949  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:57.112955  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:57.113002  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:57.149533  213635 cri.go:89] found id: ""
	I0414 17:47:57.149553  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.149560  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:57.149565  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:57.149622  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:57.184595  213635 cri.go:89] found id: ""
	I0414 17:47:57.184624  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.184632  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:57.184637  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:57.184683  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:57.219904  213635 cri.go:89] found id: ""
	I0414 17:47:57.219931  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.219942  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:57.219949  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:57.220008  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:57.255709  213635 cri.go:89] found id: ""
	I0414 17:47:57.255736  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.255745  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:57.255750  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:57.255809  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:57.289390  213635 cri.go:89] found id: ""
	I0414 17:47:57.289413  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.289419  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:57.289425  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:57.289474  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:57.329950  213635 cri.go:89] found id: ""
	I0414 17:47:57.329972  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.329978  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:57.329983  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:57.330028  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:57.365856  213635 cri.go:89] found id: ""
	I0414 17:47:57.365888  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.365901  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:57.365911  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:57.365925  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:57.378637  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:57.378661  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:57.446639  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:57.446662  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:57.446676  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:57.536049  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:57.536086  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:57.585473  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:57.585506  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:53.782957  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:56.286401  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:57.387720  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:59.886486  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:59.647389  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:02.147002  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:00.135711  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:00.151060  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:00.151131  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:00.184972  213635 cri.go:89] found id: ""
	I0414 17:48:00.185005  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.185016  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:00.185023  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:00.185088  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:00.218051  213635 cri.go:89] found id: ""
	I0414 17:48:00.218085  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.218093  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:00.218099  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:00.218156  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:00.251291  213635 cri.go:89] found id: ""
	I0414 17:48:00.251318  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.251325  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:00.251331  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:00.251392  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:00.291683  213635 cri.go:89] found id: ""
	I0414 17:48:00.291706  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.291713  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:00.291718  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:00.291765  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:00.329316  213635 cri.go:89] found id: ""
	I0414 17:48:00.329342  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.329350  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:00.329356  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:00.329409  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:00.364819  213635 cri.go:89] found id: ""
	I0414 17:48:00.364848  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.364856  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:00.364861  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:00.364905  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:00.404928  213635 cri.go:89] found id: ""
	I0414 17:48:00.404961  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.404971  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:00.404978  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:00.405040  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:00.439708  213635 cri.go:89] found id: ""
	I0414 17:48:00.439739  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.439750  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:00.439761  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:00.439776  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:00.479252  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:00.479285  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:00.533545  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:00.533576  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:00.546920  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:00.546952  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:00.614440  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:00.614461  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:00.614476  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:03.197930  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:03.212912  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:03.212973  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:03.272435  213635 cri.go:89] found id: ""
	I0414 17:48:03.272467  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.272479  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:03.272487  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:03.272554  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:58.781206  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:00.781677  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:03.286395  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:01.886559  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:03.887796  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:04.147694  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:06.647249  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:03.336351  213635 cri.go:89] found id: ""
	I0414 17:48:03.336373  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.336380  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:03.336386  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:03.336430  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:03.370368  213635 cri.go:89] found id: ""
	I0414 17:48:03.370398  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.370408  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:03.370422  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:03.370475  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:03.408402  213635 cri.go:89] found id: ""
	I0414 17:48:03.408429  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.408436  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:03.408442  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:03.408491  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:03.442912  213635 cri.go:89] found id: ""
	I0414 17:48:03.442939  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.442950  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:03.442957  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:03.443019  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:03.479439  213635 cri.go:89] found id: ""
	I0414 17:48:03.479467  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.479476  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:03.479481  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:03.479544  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:03.517971  213635 cri.go:89] found id: ""
	I0414 17:48:03.517993  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.518000  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:03.518005  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:03.518059  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:03.556177  213635 cri.go:89] found id: ""
	I0414 17:48:03.556208  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.556216  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:03.556224  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:03.556237  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:03.594142  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:03.594167  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:03.644688  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:03.644718  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:03.658140  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:03.658164  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:03.729627  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:03.729649  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:03.729663  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:06.309939  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:06.323927  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:06.323990  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:06.364388  213635 cri.go:89] found id: ""
	I0414 17:48:06.364412  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.364426  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:06.364431  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:06.364477  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:06.398800  213635 cri.go:89] found id: ""
	I0414 17:48:06.398821  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.398828  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:06.398833  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:06.398885  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:06.442842  213635 cri.go:89] found id: ""
	I0414 17:48:06.442873  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.442884  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:06.442891  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:06.442973  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:06.485910  213635 cri.go:89] found id: ""
	I0414 17:48:06.485945  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.485955  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:06.485962  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:06.486023  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:06.520624  213635 cri.go:89] found id: ""
	I0414 17:48:06.520656  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.520668  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:06.520675  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:06.520741  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:06.555790  213635 cri.go:89] found id: ""
	I0414 17:48:06.555833  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.555845  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:06.555853  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:06.555916  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:06.589144  213635 cri.go:89] found id: ""
	I0414 17:48:06.589166  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.589173  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:06.589177  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:06.589223  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:06.623771  213635 cri.go:89] found id: ""
	I0414 17:48:06.623808  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.623824  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:06.623833  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:06.623843  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:06.679003  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:06.679039  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:06.695303  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:06.695328  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:06.770562  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:06.770585  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:06.770597  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:06.850617  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:06.850652  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:05.782269  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:07.783336  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:06.387181  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:08.886322  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:09.145702  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:11.147099  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:09.390500  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:09.403827  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:09.403885  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:09.438395  213635 cri.go:89] found id: ""
	I0414 17:48:09.438420  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.438428  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:09.438434  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:09.438484  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:09.473071  213635 cri.go:89] found id: ""
	I0414 17:48:09.473098  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.473106  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:09.473112  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:09.473159  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:09.506175  213635 cri.go:89] found id: ""
	I0414 17:48:09.506205  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.506216  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:09.506223  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:09.506272  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:09.540488  213635 cri.go:89] found id: ""
	I0414 17:48:09.540511  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.540518  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:09.540523  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:09.540583  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:09.576189  213635 cri.go:89] found id: ""
	I0414 17:48:09.576222  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.576233  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:09.576241  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:09.576302  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:09.607908  213635 cri.go:89] found id: ""
	I0414 17:48:09.607937  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.607945  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:09.607950  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:09.608000  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:09.642069  213635 cri.go:89] found id: ""
	I0414 17:48:09.642098  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.642108  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:09.642115  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:09.642177  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:09.675434  213635 cri.go:89] found id: ""
	I0414 17:48:09.675463  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.675474  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:09.675484  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:09.675496  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:09.754118  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:09.754154  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:09.797336  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:09.797373  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:09.849366  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:09.849407  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:09.863427  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:09.863458  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:09.934735  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
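Every describe-nodes attempt above fails the same way: connection refused on localhost:8443, i.e. nothing is listening where the apiserver should be. A hypothetical triage snippet (not part of the test run; assumes curl and ss are available in the guest) that separates "no listener" from TLS or auth problems:

	# curl exits 7 on connection refused, matching the kubectl symptom above.
	curl -ksS --max-time 5 https://localhost:8443/healthz \
	  || echo "apiserver unreachable (curl exit $?)"
	# Confirm no process holds the port:
	sudo ss -tlnp | grep -w 8443 || echo "nothing listening on 8443"
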
	I0414 17:48:12.435482  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:12.449310  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:12.449374  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:12.484115  213635 cri.go:89] found id: ""
	I0414 17:48:12.484143  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.484153  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:12.484160  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:12.484213  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:12.521972  213635 cri.go:89] found id: ""
	I0414 17:48:12.521994  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.522001  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:12.522012  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:12.522071  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:12.554192  213635 cri.go:89] found id: ""
	I0414 17:48:12.554219  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.554229  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:12.554237  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:12.554296  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:12.587420  213635 cri.go:89] found id: ""
	I0414 17:48:12.587450  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.587460  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:12.587467  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:12.587526  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:12.621562  213635 cri.go:89] found id: ""
	I0414 17:48:12.621588  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.621599  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:12.621608  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:12.621672  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:12.660123  213635 cri.go:89] found id: ""
	I0414 17:48:12.660147  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.660155  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:12.660160  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:12.660216  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:12.693979  213635 cri.go:89] found id: ""
	I0414 17:48:12.694010  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.694021  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:12.694029  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:12.694097  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:12.728017  213635 cri.go:89] found id: ""
	I0414 17:48:12.728043  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.728051  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:12.728060  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:12.728072  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:12.782896  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:12.782927  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:12.795655  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:12.795679  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:12.865150  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:12.865183  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:12.865197  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:12.950645  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:12.950682  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:10.285784  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:12.781397  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:10.886362  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:12.888044  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:15.386245  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:13.646393  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:16.146335  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:16.640867  212456 pod_ready.go:82] duration metric: took 4m0.000569834s for pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace to be "Ready" ...
	E0414 17:48:16.640896  212456 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0414 17:48:16.640935  212456 pod_ready.go:39] duration metric: took 4m12.70748193s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:48:16.640979  212456 kubeadm.go:597] duration metric: took 4m20.79960225s to restartPrimaryControlPlane
	W0414 17:48:16.641051  212456 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0414 17:48:16.641091  212456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
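Here process 212456 gives up: the 4m0s readiness wait on metrics-server-f79f97bbb-7s74z times out, restartPrimaryControlPlane is abandoned after 4m20s, and minikube falls back to a full kubeadm reset. The readiness gate it timed out on is roughly equivalent to the stock-kubectl wait below (a sketch; the k8s-app=metrics-server label is an assumption, not taken from the log):

	# Approximate equivalent of the 4m readiness wait that timed out above;
	# -l k8s-app=metrics-server is assumed, not confirmed by the log.
	kubectl -n kube-system wait pod -l k8s-app=metrics-server \
	  --for=condition=Ready --timeout=4m0s
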
	I0414 17:48:15.490793  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:15.504867  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:15.504941  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:15.538968  213635 cri.go:89] found id: ""
	I0414 17:48:15.538990  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.538998  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:15.539003  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:15.539049  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:15.573937  213635 cri.go:89] found id: ""
	I0414 17:48:15.573961  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.573968  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:15.573973  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:15.574019  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:15.609320  213635 cri.go:89] found id: ""
	I0414 17:48:15.609346  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.609360  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:15.609367  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:15.609425  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:15.641598  213635 cri.go:89] found id: ""
	I0414 17:48:15.641626  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.641635  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:15.641641  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:15.641695  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:15.675213  213635 cri.go:89] found id: ""
	I0414 17:48:15.675239  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.675248  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:15.675255  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:15.675313  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:15.710542  213635 cri.go:89] found id: ""
	I0414 17:48:15.710565  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.710572  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:15.710578  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:15.710623  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:15.745699  213635 cri.go:89] found id: ""
	I0414 17:48:15.745724  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.745735  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:15.745742  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:15.745792  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:15.782559  213635 cri.go:89] found id: ""
	I0414 17:48:15.782586  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.782596  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:15.782605  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:15.782619  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:15.837926  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:15.837964  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:15.854293  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:15.854333  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:15.944741  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:15.944761  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:15.944773  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:16.032687  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:16.032716  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:14.784926  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:17.280964  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:17.886293  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:20.386161  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:18.574911  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:18.589009  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:18.589060  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:18.625705  213635 cri.go:89] found id: ""
	I0414 17:48:18.625730  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.625738  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:18.625743  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:18.625796  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:18.659670  213635 cri.go:89] found id: ""
	I0414 17:48:18.659704  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.659713  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:18.659719  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:18.659762  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:18.694973  213635 cri.go:89] found id: ""
	I0414 17:48:18.694997  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.695005  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:18.695011  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:18.695083  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:18.733777  213635 cri.go:89] found id: ""
	I0414 17:48:18.733801  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.733808  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:18.733813  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:18.733881  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:18.765747  213635 cri.go:89] found id: ""
	I0414 17:48:18.765768  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.765775  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:18.765780  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:18.765856  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:18.799558  213635 cri.go:89] found id: ""
	I0414 17:48:18.799585  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.799595  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:18.799601  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:18.799653  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:18.835245  213635 cri.go:89] found id: ""
	I0414 17:48:18.835279  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.835291  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:18.835300  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:18.835354  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:18.870176  213635 cri.go:89] found id: ""
	I0414 17:48:18.870201  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.870212  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:18.870222  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:18.870236  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:18.883166  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:18.883195  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:18.946103  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:18.946128  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:18.946145  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:19.023462  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:19.023496  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:19.067254  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:19.067281  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:21.619412  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:21.635163  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:21.635233  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:21.671680  213635 cri.go:89] found id: ""
	I0414 17:48:21.671705  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.671713  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:21.671719  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:21.671767  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:21.709955  213635 cri.go:89] found id: ""
	I0414 17:48:21.709987  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.709998  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:21.710005  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:21.710064  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:21.743179  213635 cri.go:89] found id: ""
	I0414 17:48:21.743202  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.743209  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:21.743214  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:21.743267  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:21.775835  213635 cri.go:89] found id: ""
	I0414 17:48:21.775862  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.775870  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:21.775875  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:21.775920  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:21.810164  213635 cri.go:89] found id: ""
	I0414 17:48:21.810190  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.810201  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:21.810207  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:21.810253  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:21.848616  213635 cri.go:89] found id: ""
	I0414 17:48:21.848639  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.848646  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:21.848651  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:21.848717  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:21.887985  213635 cri.go:89] found id: ""
	I0414 17:48:21.888014  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.888024  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:21.888030  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:21.888076  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:21.927965  213635 cri.go:89] found id: ""
	I0414 17:48:21.927992  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.928003  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:21.928013  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:21.928028  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:21.989253  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:21.989294  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:22.003399  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:22.003429  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:22.071849  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:22.071872  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:22.071889  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:22.149857  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:22.149888  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:19.283105  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:21.782995  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:22.388207  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:24.886911  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:24.691531  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:24.706169  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:24.706230  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:24.745747  213635 cri.go:89] found id: ""
	I0414 17:48:24.745780  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.745791  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:24.745799  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:24.745886  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:24.785261  213635 cri.go:89] found id: ""
	I0414 17:48:24.785284  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.785291  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:24.785296  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:24.785351  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:24.824491  213635 cri.go:89] found id: ""
	I0414 17:48:24.824525  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.824536  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:24.824550  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:24.824606  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:24.868655  213635 cri.go:89] found id: ""
	I0414 17:48:24.868683  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.868696  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:24.868704  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:24.868769  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:24.910959  213635 cri.go:89] found id: ""
	I0414 17:48:24.910982  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.910989  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:24.910995  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:24.911053  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:24.944036  213635 cri.go:89] found id: ""
	I0414 17:48:24.944065  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.944073  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:24.944078  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:24.944127  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:24.977481  213635 cri.go:89] found id: ""
	I0414 17:48:24.977512  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.977522  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:24.977529  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:24.977589  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:25.010063  213635 cri.go:89] found id: ""
	I0414 17:48:25.010087  213635 logs.go:282] 0 containers: []
	W0414 17:48:25.010094  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:25.010103  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:25.010114  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:25.062645  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:25.062680  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:25.077120  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:25.077144  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:25.151533  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:25.151553  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:25.151565  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:25.230945  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:25.230985  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:27.774758  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:27.789640  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:27.789692  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:27.822128  213635 cri.go:89] found id: ""
	I0414 17:48:27.822162  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.822169  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:27.822175  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:27.822227  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:27.858364  213635 cri.go:89] found id: ""
	I0414 17:48:27.858394  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.858401  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:27.858406  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:27.858452  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:27.893587  213635 cri.go:89] found id: ""
	I0414 17:48:27.893618  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.893628  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:27.893636  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:27.893695  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:27.930766  213635 cri.go:89] found id: ""
	I0414 17:48:27.930799  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.930810  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:27.930817  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:27.930879  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:27.962936  213635 cri.go:89] found id: ""
	I0414 17:48:27.962966  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.962977  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:27.962983  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:27.963036  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:27.999471  213635 cri.go:89] found id: ""
	I0414 17:48:27.999503  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.999511  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:27.999517  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:27.999575  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:28.030604  213635 cri.go:89] found id: ""
	I0414 17:48:28.030636  213635 logs.go:282] 0 containers: []
	W0414 17:48:28.030645  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:28.030650  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:28.030704  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:28.066407  213635 cri.go:89] found id: ""
	I0414 17:48:28.066436  213635 logs.go:282] 0 containers: []
	W0414 17:48:28.066446  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:28.066457  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:28.066471  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:28.118182  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:28.118210  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:28.131007  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:28.131031  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:28.198468  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:28.198488  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:28.198500  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:24.283310  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:26.283749  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:27.386845  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:29.387642  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
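
The interleaved pod_ready.go lines (processes 213406 and 212269) are separate test profiles polling their metrics-server pods for the Ready condition; the 4m0s cap on this wait is visible further down when 212269 times out. A hedged client-go sketch of that kind of wait (assumed shape, not minikube's actual pod_ready.go):

	package readiness
	
	import (
		"context"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitPodReady polls the pod until its Ready condition is True or the
	// timeout expires.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}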
	I0414 17:48:28.286352  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:28.286387  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:30.826694  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:30.839877  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:30.839949  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:30.873980  213635 cri.go:89] found id: ""
	I0414 17:48:30.874010  213635 logs.go:282] 0 containers: []
	W0414 17:48:30.874021  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:30.874028  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:30.874087  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:30.909567  213635 cri.go:89] found id: ""
	I0414 17:48:30.909593  213635 logs.go:282] 0 containers: []
	W0414 17:48:30.909600  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:30.909606  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:30.909661  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:30.943382  213635 cri.go:89] found id: ""
	I0414 17:48:30.943414  213635 logs.go:282] 0 containers: []
	W0414 17:48:30.943424  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:30.943431  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:30.943487  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:30.976444  213635 cri.go:89] found id: ""
	I0414 17:48:30.976477  213635 logs.go:282] 0 containers: []
	W0414 17:48:30.976488  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:30.976496  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:30.976555  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:31.010623  213635 cri.go:89] found id: ""
	I0414 17:48:31.010651  213635 logs.go:282] 0 containers: []
	W0414 17:48:31.010662  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:31.010669  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:31.010727  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:31.049542  213635 cri.go:89] found id: ""
	I0414 17:48:31.049568  213635 logs.go:282] 0 containers: []
	W0414 17:48:31.049578  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:31.049585  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:31.049646  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:31.082301  213635 cri.go:89] found id: ""
	I0414 17:48:31.082326  213635 logs.go:282] 0 containers: []
	W0414 17:48:31.082336  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:31.082343  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:31.082403  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:31.115742  213635 cri.go:89] found id: ""
	I0414 17:48:31.115768  213635 logs.go:282] 0 containers: []
	W0414 17:48:31.115776  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:31.115784  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:31.115794  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:31.167568  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:31.167598  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:31.180202  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:31.180229  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:31.247958  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:31.247980  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:31.247995  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:31.337341  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:31.337379  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:28.780817  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:30.781721  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:32.782156  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:31.886992  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:34.386180  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:33.892139  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:33.905803  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:33.905884  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:33.945429  213635 cri.go:89] found id: ""
	I0414 17:48:33.945458  213635 logs.go:282] 0 containers: []
	W0414 17:48:33.945468  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:33.945476  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:33.945524  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:33.978018  213635 cri.go:89] found id: ""
	I0414 17:48:33.978047  213635 logs.go:282] 0 containers: []
	W0414 17:48:33.978056  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:33.978063  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:33.978113  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:34.013902  213635 cri.go:89] found id: ""
	I0414 17:48:34.013926  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.013934  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:34.013940  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:34.013986  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:34.052308  213635 cri.go:89] found id: ""
	I0414 17:48:34.052340  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.052351  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:34.052358  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:34.052423  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:34.092541  213635 cri.go:89] found id: ""
	I0414 17:48:34.092565  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.092572  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:34.092577  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:34.092638  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:34.126690  213635 cri.go:89] found id: ""
	I0414 17:48:34.126725  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.126736  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:34.126745  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:34.126810  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:34.161043  213635 cri.go:89] found id: ""
	I0414 17:48:34.161072  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.161081  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:34.161087  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:34.161148  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:34.195793  213635 cri.go:89] found id: ""
	I0414 17:48:34.195817  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.195825  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:34.195835  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:34.195847  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:34.238858  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:34.238890  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:34.294092  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:34.294122  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:34.310473  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:34.310510  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:34.377489  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:34.377517  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:34.377535  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:36.963220  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:36.976594  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:36.976663  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:37.009685  213635 cri.go:89] found id: ""
	I0414 17:48:37.009710  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.009720  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:37.009727  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:37.009780  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:37.044805  213635 cri.go:89] found id: ""
	I0414 17:48:37.044832  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.044845  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:37.044852  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:37.044915  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:37.096059  213635 cri.go:89] found id: ""
	I0414 17:48:37.096082  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.096089  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:37.096094  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:37.096146  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:37.132630  213635 cri.go:89] found id: ""
	I0414 17:48:37.132654  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.132664  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:37.132670  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:37.132731  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:37.168840  213635 cri.go:89] found id: ""
	I0414 17:48:37.168865  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.168874  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:37.168881  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:37.168940  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:37.202226  213635 cri.go:89] found id: ""
	I0414 17:48:37.202250  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.202258  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:37.202264  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:37.202321  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:37.236649  213635 cri.go:89] found id: ""
	I0414 17:48:37.236677  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.236687  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:37.236695  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:37.236758  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:37.270393  213635 cri.go:89] found id: ""
	I0414 17:48:37.270417  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.270427  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:37.270438  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:37.270454  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:37.320463  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:37.320492  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:37.334355  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:37.334388  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:37.402650  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:37.402674  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:37.402686  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:37.479961  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:37.479999  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:34.782317  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:37.285771  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:36.886679  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:39.386353  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:40.024993  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:40.038522  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:40.038578  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:40.075237  213635 cri.go:89] found id: ""
	I0414 17:48:40.075264  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.075274  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:40.075282  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:40.075342  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:40.117027  213635 cri.go:89] found id: ""
	I0414 17:48:40.117052  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.117059  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:40.117065  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:40.117130  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:40.150149  213635 cri.go:89] found id: ""
	I0414 17:48:40.150181  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.150193  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:40.150201  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:40.150265  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:40.185087  213635 cri.go:89] found id: ""
	I0414 17:48:40.185114  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.185122  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:40.185128  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:40.185179  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:40.219050  213635 cri.go:89] found id: ""
	I0414 17:48:40.219077  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.219084  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:40.219090  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:40.219137  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:40.252681  213635 cri.go:89] found id: ""
	I0414 17:48:40.252712  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.252723  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:40.252731  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:40.252796  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:40.289524  213635 cri.go:89] found id: ""
	I0414 17:48:40.289551  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.289559  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:40.289564  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:40.289622  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:40.322952  213635 cri.go:89] found id: ""
	I0414 17:48:40.322986  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.322998  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:40.323009  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:40.323023  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:40.375012  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:40.375046  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:40.389868  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:40.389900  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:40.456829  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:40.456849  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:40.456861  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:40.537720  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:40.537759  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:43.079573  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:43.092754  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:43.092808  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:43.128097  213635 cri.go:89] found id: ""
	I0414 17:48:43.128131  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.128142  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:43.128150  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:43.128210  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:43.161361  213635 cri.go:89] found id: ""
	I0414 17:48:43.161391  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.161403  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:43.161410  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:43.161470  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:43.196698  213635 cri.go:89] found id: ""
	I0414 17:48:43.196780  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.196796  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:43.196807  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:43.196870  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:43.230687  213635 cri.go:89] found id: ""
	I0414 17:48:43.230717  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.230724  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:43.230729  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:43.230790  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:43.272118  213635 cri.go:89] found id: ""
	I0414 17:48:43.272143  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.272149  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:43.272155  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:43.272212  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:39.285905  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:41.782863  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:41.387417  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:43.886997  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:44.312670  212456 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.671544959s)
	I0414 17:48:44.312762  212456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:48:44.332203  212456 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:48:44.347886  212456 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:48:44.360967  212456 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:48:44.360988  212456 kubeadm.go:157] found existing configuration files:
	
	I0414 17:48:44.361036  212456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0414 17:48:44.374271  212456 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:48:44.374334  212456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:48:44.391104  212456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0414 17:48:44.407332  212456 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:48:44.407386  212456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:48:44.418237  212456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0414 17:48:44.427328  212456 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:48:44.427373  212456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:48:44.437284  212456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0414 17:48:44.446412  212456 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:48:44.446459  212456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
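
The grep/rm sequence above is the stale-kubeconfig cleanup: each of the four kubeconfigs is kept only if it already references the expected control-plane endpoint (here https://control-plane.minikube.internal:8444), and is otherwise deleted so the kubeadm init that follows regenerates it. A local Go sketch of the same idea (the real flow runs grep and rm over SSH, as logged):

	package kubeadmutil
	
	import (
		"os"
		"strings"
	)
	
	// cleanupStaleConfigs removes any kubeconfig that is missing or does not
	// reference the expected endpoint, letting kubeadm init rewrite it.
	func cleanupStaleConfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(f) // errors ignored: the file may already be absent
			}
		}
	}

Here the check "fails" in a benign way: none of the four files exist after the kubeadm reset that just completed, so there is nothing to keep.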
	I0414 17:48:44.455796  212456 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:48:44.629587  212456 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:48:43.305507  213635 cri.go:89] found id: ""
	I0414 17:48:43.305544  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.305557  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:43.305567  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:43.305667  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:43.342294  213635 cri.go:89] found id: ""
	I0414 17:48:43.342328  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.342339  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:43.342346  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:43.342403  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:43.374476  213635 cri.go:89] found id: ""
	I0414 17:48:43.374502  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.374510  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:43.374519  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:43.374529  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:43.429817  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:43.429869  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:43.446168  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:43.446205  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:43.562603  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:43.562629  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:43.562647  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:43.647833  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:43.647873  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:46.192567  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:46.205502  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:46.205572  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:46.241592  213635 cri.go:89] found id: ""
	I0414 17:48:46.241618  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.241628  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:46.241635  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:46.241698  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:46.276977  213635 cri.go:89] found id: ""
	I0414 17:48:46.277004  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.277014  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:46.277020  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:46.277079  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:46.312906  213635 cri.go:89] found id: ""
	I0414 17:48:46.312930  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.312939  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:46.312946  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:46.313007  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:46.346994  213635 cri.go:89] found id: ""
	I0414 17:48:46.347018  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.347026  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:46.347031  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:46.347077  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:46.380069  213635 cri.go:89] found id: ""
	I0414 17:48:46.380093  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.380104  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:46.380111  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:46.380172  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:46.416546  213635 cri.go:89] found id: ""
	I0414 17:48:46.416574  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.416584  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:46.416592  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:46.416652  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:46.453343  213635 cri.go:89] found id: ""
	I0414 17:48:46.453374  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.453386  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:46.453393  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:46.453447  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:46.490450  213635 cri.go:89] found id: ""
	I0414 17:48:46.490479  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.490489  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:46.490499  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:46.490513  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:46.551507  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:46.551542  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:46.565243  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:46.565272  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:46.636609  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:46.636634  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:46.636651  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:46.715829  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:46.715872  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:44.284758  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:46.782687  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:46.386592  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:46.880932  212269 pod_ready.go:82] duration metric: took 4m0.000148322s for pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace to be "Ready" ...
	E0414 17:48:46.880964  212269 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace to be "Ready" (will not retry!)
	I0414 17:48:46.880988  212269 pod_ready.go:39] duration metric: took 4m15.038784615s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:48:46.881025  212269 kubeadm.go:597] duration metric: took 4m58.434849831s to restartPrimaryControlPlane
	W0414 17:48:46.881139  212269 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0414 17:48:46.881174  212269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 17:48:52.039840  212456 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 17:48:52.039919  212456 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:48:52.040033  212456 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:48:52.040172  212456 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:48:52.040311  212456 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 17:48:52.040403  212456 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:48:52.041680  212456 out.go:235]   - Generating certificates and keys ...
	I0414 17:48:52.041782  212456 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:48:52.041901  212456 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:48:52.042004  212456 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 17:48:52.042135  212456 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 17:48:52.042241  212456 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 17:48:52.042329  212456 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 17:48:52.042439  212456 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 17:48:52.042525  212456 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 17:48:52.042625  212456 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 17:48:52.042746  212456 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 17:48:52.042810  212456 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 17:48:52.042895  212456 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:48:52.042961  212456 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:48:52.043020  212456 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 17:48:52.043068  212456 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:48:52.043153  212456 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:48:52.043209  212456 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:48:52.043309  212456 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:48:52.043396  212456 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:48:52.044723  212456 out.go:235]   - Booting up control plane ...
	I0414 17:48:52.044821  212456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:48:52.044934  212456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:48:52.045009  212456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:48:52.045114  212456 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:48:52.045213  212456 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:48:52.045252  212456 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:48:52.045398  212456 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 17:48:52.045503  212456 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 17:48:52.045581  212456 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.205474ms
	I0414 17:48:52.045662  212456 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 17:48:52.045714  212456 kubeadm.go:310] [api-check] The API server is healthy after 4.502044755s
	I0414 17:48:52.045804  212456 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 17:48:52.045996  212456 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 17:48:52.046104  212456 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 17:48:52.046335  212456 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-061428 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 17:48:52.046423  212456 kubeadm.go:310] [bootstrap-token] Using token: 0x0swo.cnocxvbqul1ca541
	I0414 17:48:52.047605  212456 out.go:235]   - Configuring RBAC rules ...
	I0414 17:48:52.047713  212456 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 17:48:52.047795  212456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 17:48:52.047959  212456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 17:48:52.048082  212456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 17:48:52.048237  212456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 17:48:52.048315  212456 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 17:48:52.048413  212456 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 17:48:52.048451  212456 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 17:48:52.048491  212456 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 17:48:52.048496  212456 kubeadm.go:310] 
	I0414 17:48:52.048549  212456 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 17:48:52.048555  212456 kubeadm.go:310] 
	I0414 17:48:52.048618  212456 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 17:48:52.048629  212456 kubeadm.go:310] 
	I0414 17:48:52.048653  212456 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 17:48:52.048710  212456 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 17:48:52.048756  212456 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 17:48:52.048762  212456 kubeadm.go:310] 
	I0414 17:48:52.048819  212456 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 17:48:52.048829  212456 kubeadm.go:310] 
	I0414 17:48:52.048872  212456 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 17:48:52.048878  212456 kubeadm.go:310] 
	I0414 17:48:52.048920  212456 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 17:48:52.048983  212456 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 17:48:52.049046  212456 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 17:48:52.049053  212456 kubeadm.go:310] 
	I0414 17:48:52.049156  212456 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 17:48:52.049245  212456 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 17:48:52.049251  212456 kubeadm.go:310] 
	I0414 17:48:52.049325  212456 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 0x0swo.cnocxvbqul1ca541 \
	I0414 17:48:52.049412  212456 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d \
	I0414 17:48:52.049431  212456 kubeadm.go:310] 	--control-plane 
	I0414 17:48:52.049437  212456 kubeadm.go:310] 
	I0414 17:48:52.049511  212456 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 17:48:52.049517  212456 kubeadm.go:310] 
	I0414 17:48:52.049584  212456 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 0x0swo.cnocxvbqul1ca541 \
	I0414 17:48:52.049724  212456 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d 
	I0414 17:48:52.049740  212456 cni.go:84] Creating CNI manager for ""
	I0414 17:48:52.049793  212456 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:48:52.051076  212456 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 17:48:52.052229  212456 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 17:48:52.062677  212456 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
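
The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is not reproduced in the log. As a hypothetical illustration only (every value below is an assumption, not the actual file), a minimal bridge conflist has this shape:

	package cni
	
	// bridgeConflist is a guess at the kind of bridge CNI config written to
	// /etc/cni/net.d/1-k8s.conflist; the real 496-byte file is not in the log.
	const bridgeConflist = `{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": {"portMappings": true}
	    }
	  ]
	}`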
	I0414 17:48:52.080923  212456 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 17:48:52.081020  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:52.081077  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-061428 minikube.k8s.io/updated_at=2025_04_14T17_48_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f1e69a1cd498979c80dbe968253c827f6eb2cf37 minikube.k8s.io/name=default-k8s-diff-port-061428 minikube.k8s.io/primary=true
	I0414 17:48:52.125288  212456 ops.go:34] apiserver oom_adj: -16
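
The ops.go:34 line records the apiserver's OOM score adjustment; a negative value such as -16 makes the kernel much less likely to OOM-kill the process. A sketch of the probe behind the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command run above (assumed shape, not minikube's ops.go):

	package ops
	
	import (
		"os"
		"os/exec"
		"strings"
	)
	
	// apiserverOOMAdj reads /proc/<pid>/oom_adj for the first
	// kube-apiserver process that pgrep reports.
	func apiserverOOMAdj() (string, error) {
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			return "", err // pgrep exits non-zero when nothing matches
		}
		pid := strings.Fields(string(out))[0] // first matching PID
		b, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(b)), nil
	}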
	I0414 17:48:52.342710  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:52.842859  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:49.255006  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:49.277839  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:49.277915  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:49.340015  213635 cri.go:89] found id: ""
	I0414 17:48:49.340051  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.340063  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:49.340071  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:49.340143  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:49.375879  213635 cri.go:89] found id: ""
	I0414 17:48:49.375907  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.375917  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:49.375924  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:49.375987  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:49.408770  213635 cri.go:89] found id: ""
	I0414 17:48:49.408796  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.408806  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:49.408813  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:49.408877  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:49.446644  213635 cri.go:89] found id: ""
	I0414 17:48:49.446673  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.446682  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:49.446690  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:49.446758  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:49.486858  213635 cri.go:89] found id: ""
	I0414 17:48:49.486887  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.486897  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:49.486904  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:49.486964  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:49.525400  213635 cri.go:89] found id: ""
	I0414 17:48:49.525427  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.525437  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:49.525445  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:49.525507  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:49.559553  213635 cri.go:89] found id: ""
	I0414 17:48:49.559578  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.559587  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:49.559595  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:49.559656  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:49.591090  213635 cri.go:89] found id: ""
	I0414 17:48:49.591123  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.591131  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:49.591144  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:49.591155  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:49.643807  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:49.643841  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:49.657066  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:49.657090  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:49.729359  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:49.729388  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:49.729404  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:49.808543  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:49.808573  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
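
Each polling round above asks CRI-O for every control-plane component in turn and treats an empty ID list as "No container was found". A minimal standalone sketch of that probe, assuming local exec in place of minikube's ssh_runner (this is illustrative, not the actual cri.go code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers mirrors the probe in the log:
	//   sudo crictl ps -a --quiet --name=<component>
	// --quiet prints one container ID per line; an empty result means the
	// component has no container, running or exited.
	func listContainers(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps --name=%s: %w", component, err)
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
			ids, err := listContainers(c)
			switch {
			case err != nil:
				fmt.Printf("probe %s: %v\n", c, err)
			case len(ids) == 0:
				fmt.Printf("No container was found matching %q\n", c)
			default:
				fmt.Printf("%s: found %v\n", c, ids)
			}
		}
	}

Probing with State:all (the -a flag) is deliberate: a crashed, exited container is still evidence worth collecting, which is why the round continues into log gathering even when every list comes back empty.
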
	I0414 17:48:52.348426  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:52.366010  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:52.366076  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:52.404950  213635 cri.go:89] found id: ""
	I0414 17:48:52.404976  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.404985  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:52.404991  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:52.405046  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:52.445893  213635 cri.go:89] found id: ""
	I0414 17:48:52.445927  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.445937  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:52.445945  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:52.446011  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:52.479635  213635 cri.go:89] found id: ""
	I0414 17:48:52.479657  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.479664  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:52.479671  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:52.479738  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:52.523616  213635 cri.go:89] found id: ""
	I0414 17:48:52.523650  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.523661  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:52.523669  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:52.523730  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:52.571706  213635 cri.go:89] found id: ""
	I0414 17:48:52.571739  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.571751  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:52.571758  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:52.571826  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:52.616799  213635 cri.go:89] found id: ""
	I0414 17:48:52.616822  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.616831  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:52.616836  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:52.616901  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:52.652373  213635 cri.go:89] found id: ""
	I0414 17:48:52.652402  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.652413  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:52.652420  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:52.652481  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:52.689582  213635 cri.go:89] found id: ""
	I0414 17:48:52.689614  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.689626  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:52.689637  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:52.689651  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:52.741215  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:52.741254  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:52.757324  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:52.757361  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:52.828589  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:52.828609  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:52.828621  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:52.918483  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:52.918527  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
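
The log-gathering step that closes each round runs a fixed set of shell commands; note the container-status one, which degrades from crictl to docker via `which crictl || echo crictl`. A rough local-exec sketch of the same sequence (hypothetical helper, not minikube's logs.go):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one log-collection command through bash, the way ssh_runner
	// invokes them on the guest. CombinedOutput keeps stderr, which is where
	// most of the useful failure detail lands.
	func gather(name, command string) {
		out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
		fmt.Printf("=== %s (err=%v) ===\n%s\n", name, err, out)
	}

	func main() {
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		gather("CRI-O", "sudo journalctl -u crio -n 400")
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}
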
	I0414 17:48:49.290709  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:51.781114  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:53.343155  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:53.842838  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:54.343070  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:54.843789  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:55.342935  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:55.843502  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:55.939704  212456 kubeadm.go:1113] duration metric: took 3.858757705s to wait for elevateKubeSystemPrivileges
	I0414 17:48:55.939738  212456 kubeadm.go:394] duration metric: took 5m0.143792732s to StartCluster
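
The burst of `kubectl get sa default` runs just above is a plain retry loop: the default service account only exists once kube-controller-manager is reconciling, so its appearance doubles as a readiness signal. A sketch under that assumption (the 500ms interval and 2m budget are illustrative, not taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls `kubectl get sa default` until it exits 0,
	// i.e. until the default service account has been created.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // service account exists
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("default service account not ready after %s", timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		err := waitForDefaultSA("/var/lib/minikube/binaries/v1.32.2/kubectl",
			"/var/lib/minikube/kubeconfig", 2*time.Minute)
		fmt.Println("wait result:", err)
	}
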
	I0414 17:48:55.939772  212456 settings.go:142] acquiring lock: {Name:mk0f1596f566b3225bf96154f374fff0641b21e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:48:55.939872  212456 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:48:55.941014  212456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:48:55.941300  212456 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 17:48:55.941438  212456 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 17:48:55.941538  212456 config.go:182] Loaded profile config "default-k8s-diff-port-061428": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:48:55.941554  212456 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-061428"
	I0414 17:48:55.941576  212456 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-061428"
	I0414 17:48:55.941591  212456 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-061428"
	I0414 17:48:55.941600  212456 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-061428"
	I0414 17:48:55.941602  212456 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-061428"
	I0414 17:48:55.941601  212456 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-061428"
	W0414 17:48:55.941614  212456 addons.go:247] addon dashboard should already be in state true
	I0414 17:48:55.941622  212456 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-061428"
	W0414 17:48:55.941645  212456 addons.go:247] addon metrics-server should already be in state true
	I0414 17:48:55.941654  212456 host.go:66] Checking if "default-k8s-diff-port-061428" exists ...
	I0414 17:48:55.941580  212456 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-061428"
	I0414 17:48:55.941676  212456 host.go:66] Checking if "default-k8s-diff-port-061428" exists ...
	W0414 17:48:55.941703  212456 addons.go:247] addon storage-provisioner should already be in state true
	I0414 17:48:55.941749  212456 host.go:66] Checking if "default-k8s-diff-port-061428" exists ...
	I0414 17:48:55.942083  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.942123  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.942152  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.942089  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.942265  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.942088  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.942329  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.942159  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.943212  212456 out.go:177] * Verifying Kubernetes components...
	I0414 17:48:55.944529  212456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:48:55.961205  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42543
	I0414 17:48:55.961205  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I0414 17:48:55.961207  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46211
	I0414 17:48:55.961746  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.961764  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.961872  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.962378  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.962406  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.962382  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.962446  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.962515  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.962533  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.962928  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.963036  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.963098  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetState
	I0414 17:48:55.963185  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.963383  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40315
	I0414 17:48:55.963645  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.963676  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.963884  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.963930  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.964392  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.964780  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.964796  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.965235  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.965735  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.965770  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.966920  212456 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-061428"
	W0414 17:48:55.966941  212456 addons.go:247] addon default-storageclass should already be in state true
	I0414 17:48:55.966965  212456 host.go:66] Checking if "default-k8s-diff-port-061428" exists ...
	I0414 17:48:55.967303  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.967339  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.981120  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34037
	I0414 17:48:55.981603  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.982500  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.982521  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.982919  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.983222  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetState
	I0414 17:48:55.983374  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44475
	I0414 17:48:55.983676  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.987256  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .DriverName
	I0414 17:48:55.987275  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46701
	I0414 17:48:55.987392  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.987404  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.987825  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.988138  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.988179  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.988192  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.988507  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.988780  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetState
	I0414 17:48:55.988791  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetState
	I0414 17:48:55.989758  212456 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0414 17:48:55.991265  212456 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0414 17:48:55.991271  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .DriverName
	I0414 17:48:55.991283  212456 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0414 17:48:55.991300  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHHostname
	I0414 17:48:55.992806  212456 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0414 17:48:55.993944  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .DriverName
	I0414 17:48:55.995202  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:55.995700  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:77:2e", ip: ""} in network mk-default-k8s-diff-port-061428: {Iface:virbr3 ExpiryTime:2025-04-14 18:43:42 +0000 UTC Type:0 Mac:52:54:00:b1:77:2e Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-061428 Clientid:01:52:54:00:b1:77:2e}
	I0414 17:48:55.995715  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined IP address 192.168.61.196 and MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:55.995878  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHPort
	I0414 17:48:55.995970  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHKeyPath
	I0414 17:48:55.996048  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHUsername
	I0414 17:48:55.996310  212456 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/default-k8s-diff-port-061428/id_rsa Username:docker}
	I0414 17:48:55.998615  212456 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0414 17:48:55.998632  212456 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:48:55.999859  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0414 17:48:55.999877  212456 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0414 17:48:55.999893  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHHostname
	I0414 17:48:56.000008  212456 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:48:56.000031  212456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 17:48:56.000048  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHHostname
	I0414 17:48:56.003728  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:56.004208  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:77:2e", ip: ""} in network mk-default-k8s-diff-port-061428: {Iface:virbr3 ExpiryTime:2025-04-14 18:43:42 +0000 UTC Type:0 Mac:52:54:00:b1:77:2e Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-061428 Clientid:01:52:54:00:b1:77:2e}
	I0414 17:48:56.004226  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined IP address 192.168.61.196 and MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:56.004232  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:56.004445  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHPort
	I0414 17:48:56.004661  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHKeyPath
	I0414 17:48:56.004738  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:77:2e", ip: ""} in network mk-default-k8s-diff-port-061428: {Iface:virbr3 ExpiryTime:2025-04-14 18:43:42 +0000 UTC Type:0 Mac:52:54:00:b1:77:2e Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-061428 Clientid:01:52:54:00:b1:77:2e}
	I0414 17:48:56.004762  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined IP address 192.168.61.196 and MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:56.004788  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHUsername
	I0414 17:48:56.004926  212456 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/default-k8s-diff-port-061428/id_rsa Username:docker}
	I0414 17:48:56.005143  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHPort
	I0414 17:48:56.005294  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHKeyPath
	I0414 17:48:56.005400  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHUsername
	I0414 17:48:56.005546  212456 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/default-k8s-diff-port-061428/id_rsa Username:docker}
	I0414 17:48:56.015091  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41055
	I0414 17:48:56.015439  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:56.015805  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:56.015814  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:56.016147  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:56.016520  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:56.016543  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:56.032058  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44219
	I0414 17:48:56.032451  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:56.032966  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:56.032988  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:56.033343  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:56.033531  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetState
	I0414 17:48:56.035026  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .DriverName
	I0414 17:48:56.035244  212456 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 17:48:56.035267  212456 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 17:48:56.035289  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHHostname
	I0414 17:48:56.037961  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:56.039361  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:77:2e", ip: ""} in network mk-default-k8s-diff-port-061428: {Iface:virbr3 ExpiryTime:2025-04-14 18:43:42 +0000 UTC Type:0 Mac:52:54:00:b1:77:2e Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-061428 Clientid:01:52:54:00:b1:77:2e}
	I0414 17:48:56.039393  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined IP address 192.168.61.196 and MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:56.042043  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHPort
	I0414 17:48:56.042282  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHKeyPath
	I0414 17:48:56.044137  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHUsername
	I0414 17:48:56.044613  212456 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/default-k8s-diff-port-061428/id_rsa Username:docker}
	I0414 17:48:56.170857  212456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:48:56.201264  212456 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-061428" to be "Ready" ...
	I0414 17:48:56.215666  212456 node_ready.go:49] node "default-k8s-diff-port-061428" has status "Ready":"True"
	I0414 17:48:56.215687  212456 node_ready.go:38] duration metric: took 14.390119ms for node "default-k8s-diff-port-061428" to be "Ready" ...
	I0414 17:48:56.215698  212456 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:48:56.219556  212456 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:48:56.325515  212456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:48:56.328344  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0414 17:48:56.328369  212456 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0414 17:48:56.366616  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0414 17:48:56.366644  212456 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0414 17:48:56.366924  212456 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0414 17:48:56.366947  212456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0414 17:48:56.400343  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0414 17:48:56.400365  212456 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0414 17:48:56.403134  212456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 17:48:56.450599  212456 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0414 17:48:56.450631  212456 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0414 17:48:56.474003  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0414 17:48:56.474030  212456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0414 17:48:56.564681  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0414 17:48:56.564716  212456 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0414 17:48:56.565092  212456 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 17:48:56.565114  212456 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0414 17:48:56.634647  212456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 17:48:56.667139  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0414 17:48:56.667170  212456 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0414 17:48:56.800483  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0414 17:48:56.800513  212456 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0414 17:48:56.844350  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0414 17:48:56.844380  212456 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0414 17:48:56.924656  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0414 17:48:56.924693  212456 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0414 17:48:57.009703  212456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
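
Addon installation above is two steps per manifest: scp the YAML into /etc/kubernetes/addons/, then a single kubectl apply naming every file with repeated -f flags. A sketch of the apply half (sudo's VAR=value form carries KUBECONFIG through; the binary and kubeconfig paths are copied from the log, the two manifests are just examples):

	package main

	import (
		"os"
		"os/exec"
	)

	// applyManifests reproduces the one-shot apply call from the log:
	//   sudo KUBECONFIG=... kubectl apply -f a.yaml -f b.yaml ...
	func applyManifests(kubectl string, manifests ...string) error {
		args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.32.2/kubectl"
		if err := applyManifests(kubectl,
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml"); err != nil {
			os.Exit(1)
		}
	}

Batching all manifests into one apply keeps the install atomic from the log's point of view: the 2.37s duration reported above covers the whole dashboard bundle, not ten separate invocations.
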
	I0414 17:48:57.322557  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.322593  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.322574  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.322695  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.322923  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.322939  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.322953  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.322961  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.322979  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.322998  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.323007  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.323016  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.324913  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | Closing plugin on server side
	I0414 17:48:57.324970  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | Closing plugin on server side
	I0414 17:48:57.324986  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.324997  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.325005  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.325019  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.345450  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.345469  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.345740  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.345761  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.943361  212456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.308667432s)
	I0414 17:48:57.943408  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.943422  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.943797  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.943831  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.943842  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.943851  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.943880  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | Closing plugin on server side
	I0414 17:48:57.944243  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | Closing plugin on server side
	I0414 17:48:57.944262  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.944275  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.944294  212456 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-061428"
	I0414 17:48:55.461925  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:55.475396  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:55.475472  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:55.511338  213635 cri.go:89] found id: ""
	I0414 17:48:55.511366  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.511374  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:55.511381  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:55.511444  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:55.547324  213635 cri.go:89] found id: ""
	I0414 17:48:55.547348  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.547355  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:55.547366  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:55.547423  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:55.593274  213635 cri.go:89] found id: ""
	I0414 17:48:55.593303  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.593314  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:55.593322  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:55.593386  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:55.628013  213635 cri.go:89] found id: ""
	I0414 17:48:55.628042  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.628053  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:55.628060  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:55.628127  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:55.663752  213635 cri.go:89] found id: ""
	I0414 17:48:55.663786  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.663798  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:55.663805  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:55.663867  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:55.700578  213635 cri.go:89] found id: ""
	I0414 17:48:55.700601  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.700609  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:55.700614  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:55.700661  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:55.733772  213635 cri.go:89] found id: ""
	I0414 17:48:55.733797  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.733805  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:55.733811  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:55.733891  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:55.769135  213635 cri.go:89] found id: ""
	I0414 17:48:55.769161  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.769174  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:55.769184  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:55.769196  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:55.810526  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:55.810560  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:55.863132  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:55.863166  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:55.879346  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:55.879381  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:55.961385  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:55.961403  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:55.961418  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:53.781674  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:55.784266  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:58.283947  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:58.225462  212456 pod_ready.go:103] pod "etcd-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:59.380615  212456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.370840717s)
	I0414 17:48:59.380686  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:59.380701  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:59.381003  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:59.381024  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:59.381039  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:59.381047  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:59.381256  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | Closing plugin on server side
	I0414 17:48:59.381286  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:59.381299  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:59.382695  212456 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-061428 addons enable metrics-server
	
	I0414 17:48:59.383922  212456 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0414 17:48:59.385040  212456 addons.go:514] duration metric: took 3.443627022s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0414 17:49:00.227357  212456 pod_ready.go:103] pod "etcd-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:02.723936  212456 pod_ready.go:103] pod "etcd-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:58.566639  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:58.580841  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:58.580906  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:58.620613  213635 cri.go:89] found id: ""
	I0414 17:48:58.620647  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.620659  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:58.620668  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:58.620736  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:58.661513  213635 cri.go:89] found id: ""
	I0414 17:48:58.661549  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.661559  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:58.661567  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:58.661637  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:58.710480  213635 cri.go:89] found id: ""
	I0414 17:48:58.710512  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.710524  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:58.710531  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:58.710594  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:58.755300  213635 cri.go:89] found id: ""
	I0414 17:48:58.755328  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.755339  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:58.755346  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:58.755403  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:58.791364  213635 cri.go:89] found id: ""
	I0414 17:48:58.791396  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.791416  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:58.791424  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:58.791490  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:58.830571  213635 cri.go:89] found id: ""
	I0414 17:48:58.830598  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.830610  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:58.830617  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:58.830677  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:58.864897  213635 cri.go:89] found id: ""
	I0414 17:48:58.864924  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.864933  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:58.864940  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:58.865000  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:58.900362  213635 cri.go:89] found id: ""
	I0414 17:48:58.900393  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.900403  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:58.900414  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:58.900431  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:58.953300  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:58.953340  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:58.974592  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:58.974634  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:59.054206  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:59.054234  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:59.054251  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:59.137354  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:59.137390  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:01.684252  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:01.702697  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:01.702776  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:01.746204  213635 cri.go:89] found id: ""
	I0414 17:49:01.746232  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.746276  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:01.746284  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:01.746347  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:01.784544  213635 cri.go:89] found id: ""
	I0414 17:49:01.784574  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.784584  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:01.784591  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:01.784649  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:01.821353  213635 cri.go:89] found id: ""
	I0414 17:49:01.821382  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.821392  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:01.821399  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:01.821454  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:01.855681  213635 cri.go:89] found id: ""
	I0414 17:49:01.855707  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.855715  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:01.855723  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:01.855783  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:01.891114  213635 cri.go:89] found id: ""
	I0414 17:49:01.891142  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.891153  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:01.891161  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:01.891230  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:01.926536  213635 cri.go:89] found id: ""
	I0414 17:49:01.926570  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.926581  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:01.926588  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:01.926648  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:01.971430  213635 cri.go:89] found id: ""
	I0414 17:49:01.971455  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.971462  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:01.971468  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:01.971513  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:02.010416  213635 cri.go:89] found id: ""
	I0414 17:49:02.010444  213635 logs.go:282] 0 containers: []
	W0414 17:49:02.010452  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:02.010461  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:02.010476  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:02.093422  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:02.093451  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:02.093468  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:02.175219  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:02.175256  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:02.216929  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:02.216957  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:02.269151  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:02.269188  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:00.784029  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:03.284820  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:03.725360  212456 pod_ready.go:93] pod "etcd-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:03.725386  212456 pod_ready.go:82] duration metric: took 7.505806576s for pod "etcd-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:03.725396  212456 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:03.729623  212456 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:03.729653  212456 pod_ready.go:82] duration metric: took 4.248954ms for pod "kube-apiserver-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:03.729668  212456 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:03.733261  212456 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:03.733283  212456 pod_ready.go:82] duration metric: took 3.605315ms for pod "kube-controller-manager-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:03.733294  212456 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:04.239874  212456 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:04.239896  212456 pod_ready.go:82] duration metric: took 506.59428ms for pod "kube-scheduler-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:04.239904  212456 pod_ready.go:39] duration metric: took 8.024194625s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
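
Behind each pod_ready line is a poll of the pod's Ready condition via the Kubernetes API. A client-go sketch of that check (pod name and kubeconfig path taken from the log; the 2s interval is an assumption, and the k8s.io module dependencies must be available):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod's Ready condition is True, which is the
	// check behind the has status "Ready":"False"/"True" lines above.
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"etcd-default-k8s-diff-port-061428", metav1.GetOptions{})
			if err == nil && isReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Println("pod not Ready yet")
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}
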
	I0414 17:49:04.239919  212456 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:49:04.239968  212456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:04.262907  212456 api_server.go:72] duration metric: took 8.321571945s to wait for apiserver process to appear ...
	I0414 17:49:04.262930  212456 api_server.go:88] waiting for apiserver healthz status ...
	I0414 17:49:04.262950  212456 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I0414 17:49:04.267486  212456 api_server.go:279] https://192.168.61.196:8444/healthz returned 200:
	ok
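The healthz probe is a plain HTTPS GET; a 200 response with body "ok" is what the runner logs above. An equivalent manual check would be (sketch; -k skips TLS verification, whereas minikube's client trusts the cluster CA):

	curl -k https://192.168.61.196:8444/healthz
	# ok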
	I0414 17:49:04.268404  212456 api_server.go:141] control plane version: v1.32.2
	I0414 17:49:04.268420  212456 api_server.go:131] duration metric: took 5.484737ms to wait for apiserver health ...
	I0414 17:49:04.268432  212456 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 17:49:04.271870  212456 system_pods.go:59] 9 kube-system pods found
	I0414 17:49:04.271899  212456 system_pods.go:61] "coredns-668d6bf9bc-mdntl" [009622fa-7c7c-4903-945f-d2bbf5262a9b] Running
	I0414 17:49:04.271908  212456 system_pods.go:61] "coredns-668d6bf9bc-qhjnc" [97f585f4-e039-4c34-b132-9a56318e7ed0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 17:49:04.271918  212456 system_pods.go:61] "etcd-default-k8s-diff-port-061428" [3f7f2d5f-ae4c-4946-952c-9aae0156cf95] Running
	I0414 17:49:04.271924  212456 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-061428" [accdcd02-d8e2-447c-83f2-a6cd0b935b7b] Running
	I0414 17:49:04.271928  212456 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-061428" [08894510-d41c-4e93-b1a9-43888732429b] Running
	I0414 17:49:04.271931  212456 system_pods.go:61] "kube-proxy-2ft7c" [7d0e0148-267c-4421-846e-7d2f8f2f3a14] Running
	I0414 17:49:04.271935  212456 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-061428" [9d32a872-0f66-4f25-81f1-9707372dbc6f] Running
	I0414 17:49:04.271939  212456 system_pods.go:61] "metrics-server-f79f97bbb-g2k8m" [b02b8a70-ae5c-4677-83b5-b817fc733882] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:49:04.271945  212456 system_pods.go:61] "storage-provisioner" [4d1ccb5e-58d4-43ea-aca2-885ad7af9484] Running
	I0414 17:49:04.271951  212456 system_pods.go:74] duration metric: took 3.508628ms to wait for pod list to return data ...
	I0414 17:49:04.271959  212456 default_sa.go:34] waiting for default service account to be created ...
	I0414 17:49:04.274062  212456 default_sa.go:45] found service account: "default"
	I0414 17:49:04.274080  212456 default_sa.go:55] duration metric: took 2.11536ms for default service account to be created ...
	I0414 17:49:04.274086  212456 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 17:49:04.324903  212456 system_pods.go:86] 9 kube-system pods found
	I0414 17:49:04.324934  212456 system_pods.go:89] "coredns-668d6bf9bc-mdntl" [009622fa-7c7c-4903-945f-d2bbf5262a9b] Running
	I0414 17:49:04.324947  212456 system_pods.go:89] "coredns-668d6bf9bc-qhjnc" [97f585f4-e039-4c34-b132-9a56318e7ed0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 17:49:04.324954  212456 system_pods.go:89] "etcd-default-k8s-diff-port-061428" [3f7f2d5f-ae4c-4946-952c-9aae0156cf95] Running
	I0414 17:49:04.324963  212456 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-061428" [accdcd02-d8e2-447c-83f2-a6cd0b935b7b] Running
	I0414 17:49:04.324968  212456 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-061428" [08894510-d41c-4e93-b1a9-43888732429b] Running
	I0414 17:49:04.324974  212456 system_pods.go:89] "kube-proxy-2ft7c" [7d0e0148-267c-4421-846e-7d2f8f2f3a14] Running
	I0414 17:49:04.324979  212456 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-061428" [9d32a872-0f66-4f25-81f1-9707372dbc6f] Running
	I0414 17:49:04.324987  212456 system_pods.go:89] "metrics-server-f79f97bbb-g2k8m" [b02b8a70-ae5c-4677-83b5-b817fc733882] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:49:04.324993  212456 system_pods.go:89] "storage-provisioner" [4d1ccb5e-58d4-43ea-aca2-885ad7af9484] Running
	I0414 17:49:04.325002  212456 system_pods.go:126] duration metric: took 50.910972ms to wait for k8s-apps to be running ...
	I0414 17:49:04.325021  212456 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 17:49:04.325080  212456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:49:04.339750  212456 system_svc.go:56] duration metric: took 14.732403ms WaitForService to wait for kubelet
	I0414 17:49:04.339775  212456 kubeadm.go:582] duration metric: took 8.398444377s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 17:49:04.339798  212456 node_conditions.go:102] verifying NodePressure condition ...
	I0414 17:49:04.524559  212456 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 17:49:04.524654  212456 node_conditions.go:123] node cpu capacity is 2
	I0414 17:49:04.524675  212456 node_conditions.go:105] duration metric: took 184.870799ms to run NodePressure ...
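The NodePressure verification reads capacity and pressure conditions straight from the Node object; the same figures logged above (17734596Ki ephemeral storage, 2 CPUs) are visible with kubectl (sketch):

	kubectl get nodes -o jsonpath='{.items[0].status.capacity}'
	kubectl get nodes -o jsonpath='{.items[0].status.conditions[*].type}'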
	I0414 17:49:04.524690  212456 start.go:241] waiting for startup goroutines ...
	I0414 17:49:04.524701  212456 start.go:246] waiting for cluster config update ...
	I0414 17:49:04.524721  212456 start.go:255] writing updated cluster config ...
	I0414 17:49:04.525044  212456 ssh_runner.go:195] Run: rm -f paused
	I0414 17:49:04.582311  212456 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 17:49:04.584154  212456 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-061428" cluster and "default" namespace by default
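With the profile reporting Done, the kubeconfig current-context has been switched to the new cluster; a quick sanity check at this point would be (sketch, using the kubectl 1.32.3 detected above):

	kubectl config current-context    # default-k8s-diff-port-061428
	kubectl -n kube-system get pods   # the nine pods enumerated at 17:49:04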
	I0414 17:49:04.787535  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:04.801528  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:04.801604  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:04.838408  213635 cri.go:89] found id: ""
	I0414 17:49:04.838442  213635 logs.go:282] 0 containers: []
	W0414 17:49:04.838458  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:04.838466  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:04.838529  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:04.888614  213635 cri.go:89] found id: ""
	I0414 17:49:04.888645  213635 logs.go:282] 0 containers: []
	W0414 17:49:04.888658  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:04.888667  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:04.888720  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:04.931279  213635 cri.go:89] found id: ""
	I0414 17:49:04.931307  213635 logs.go:282] 0 containers: []
	W0414 17:49:04.931317  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:04.931325  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:04.931461  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:04.970024  213635 cri.go:89] found id: ""
	I0414 17:49:04.970052  213635 logs.go:282] 0 containers: []
	W0414 17:49:04.970061  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:04.970069  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:04.970138  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:05.012914  213635 cri.go:89] found id: ""
	I0414 17:49:05.012938  213635 logs.go:282] 0 containers: []
	W0414 17:49:05.012958  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:05.012967  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:05.013027  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:05.050788  213635 cri.go:89] found id: ""
	I0414 17:49:05.050811  213635 logs.go:282] 0 containers: []
	W0414 17:49:05.050834  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:05.050842  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:05.050905  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:05.090988  213635 cri.go:89] found id: ""
	I0414 17:49:05.091017  213635 logs.go:282] 0 containers: []
	W0414 17:49:05.091028  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:05.091036  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:05.091101  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:05.127104  213635 cri.go:89] found id: ""
	I0414 17:49:05.127138  213635 logs.go:282] 0 containers: []
	W0414 17:49:05.127149  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
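Each sweep in the 213635 run issues one CRI query per expected control-plane component; an empty --quiet listing is what produces the found id: "" lines. A single component's probe, taken from the commands above:

	sudo crictl ps -a --quiet --name=kube-apiserver   # no output: no such container exists yet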
	I0414 17:49:05.127160  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:05.127176  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:05.143792  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:05.143828  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:05.218655  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
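Every "describe nodes" attempt in this run fails identically: no kube-apiserver container exists (see the empty crictl listings above), so nothing answers on localhost:8443 and the bundled v1.20.0 kubectl is refused. The failing probe, verbatim from the log:

	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig
	# The connection to the server localhost:8443 was refused - did you specify the right host or port?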
	I0414 17:49:05.218680  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:05.218697  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:05.306169  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:05.306201  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:05.347150  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:05.347190  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:07.907355  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:07.920775  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:07.920854  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:07.958486  213635 cri.go:89] found id: ""
	I0414 17:49:07.958517  213635 logs.go:282] 0 containers: []
	W0414 17:49:07.958527  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:07.958534  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:07.958600  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:07.995351  213635 cri.go:89] found id: ""
	I0414 17:49:07.995383  213635 logs.go:282] 0 containers: []
	W0414 17:49:07.995394  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:07.995401  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:07.995464  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:08.031830  213635 cri.go:89] found id: ""
	I0414 17:49:08.031864  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.031876  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:08.031885  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:08.031953  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:08.072277  213635 cri.go:89] found id: ""
	I0414 17:49:08.072308  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.072321  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:08.072328  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:08.072400  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:08.107778  213635 cri.go:89] found id: ""
	I0414 17:49:08.107811  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.107823  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:08.107832  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:08.107889  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:08.144220  213635 cri.go:89] found id: ""
	I0414 17:49:08.144254  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.144267  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:08.144276  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:08.144350  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:08.199205  213635 cri.go:89] found id: ""
	I0414 17:49:08.199238  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.199251  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:08.199260  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:08.199329  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:08.236929  213635 cri.go:89] found id: ""
	I0414 17:49:08.236966  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.236978  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:08.236989  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:08.237006  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:05.781883  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:07.782747  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:08.288285  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:08.288309  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:08.301531  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:08.301562  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:08.370610  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:08.370643  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:08.370663  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:08.449517  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:08.449559  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:10.989149  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:11.004705  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:11.004776  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:11.044842  213635 cri.go:89] found id: ""
	I0414 17:49:11.044872  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.044882  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:11.044889  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:11.044944  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:11.079268  213635 cri.go:89] found id: ""
	I0414 17:49:11.079296  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.079306  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:11.079313  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:11.079373  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:11.111894  213635 cri.go:89] found id: ""
	I0414 17:49:11.111921  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.111931  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:11.111937  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:11.111993  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:11.147005  213635 cri.go:89] found id: ""
	I0414 17:49:11.147029  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.147039  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:11.147046  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:11.147115  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:11.181246  213635 cri.go:89] found id: ""
	I0414 17:49:11.181274  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.181281  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:11.181286  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:11.181333  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:11.222368  213635 cri.go:89] found id: ""
	I0414 17:49:11.222396  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.222404  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:11.222409  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:11.222455  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:11.262336  213635 cri.go:89] found id: ""
	I0414 17:49:11.262360  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.262367  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:11.262373  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:11.262430  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:11.305115  213635 cri.go:89] found id: ""
	I0414 17:49:11.305146  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.305157  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:11.305168  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:11.305180  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:11.340697  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:11.340726  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:11.390526  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:11.390566  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:11.403671  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:11.403699  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:11.478187  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:11.478210  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:11.478225  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:10.282583  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:12.781281  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:14.950237  212269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (28.069030835s)
	I0414 17:49:14.950306  212269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:49:14.971834  212269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:49:14.987342  212269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:49:15.000668  212269 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:49:15.000687  212269 kubeadm.go:157] found existing configuration files:
	
	I0414 17:49:15.000752  212269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:49:15.020443  212269 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:49:15.020492  212269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:49:15.037229  212269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:49:15.049591  212269 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:49:15.049642  212269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:49:15.059769  212269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:49:15.077786  212269 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:49:15.077853  212269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:49:15.089728  212269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:49:15.100674  212269 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:49:15.100715  212269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
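The status-2 exits above are expected: kubeadm reset already deleted the kubeconfig files, so each stale-config probe finds nothing to keep. Per file the rule is "keep only if it points at the control-plane URL, otherwise remove", roughly (sketch, admin.conf shown):

	sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf \
	    || sudo rm -f /etc/kubernetes/admin.conf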
	I0414 17:49:15.111637  212269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:49:15.291703  212269 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
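The Service-Kubelet warning is harmless here because minikube starts the kubelet itself, but the remedy kubeadm suggests is a one-liner (sketch):

	sudo systemctl enable kubelet.service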
	I0414 17:49:14.068187  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:14.082429  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:14.082502  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:14.118294  213635 cri.go:89] found id: ""
	I0414 17:49:14.118322  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.118333  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:14.118339  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:14.118399  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:14.150631  213635 cri.go:89] found id: ""
	I0414 17:49:14.150661  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.150673  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:14.150680  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:14.150739  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:14.182138  213635 cri.go:89] found id: ""
	I0414 17:49:14.182168  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.182178  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:14.182191  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:14.182245  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:14.215897  213635 cri.go:89] found id: ""
	I0414 17:49:14.215926  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.215939  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:14.215945  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:14.216007  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:14.250709  213635 cri.go:89] found id: ""
	I0414 17:49:14.250735  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.250745  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:14.250752  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:14.250827  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:14.284335  213635 cri.go:89] found id: ""
	I0414 17:49:14.284359  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.284369  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:14.284377  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:14.284437  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:14.320670  213635 cri.go:89] found id: ""
	I0414 17:49:14.320695  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.320705  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:14.320712  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:14.320772  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:14.352588  213635 cri.go:89] found id: ""
	I0414 17:49:14.352612  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.352620  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:14.352630  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:14.352643  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:14.402495  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:14.402527  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:14.415185  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:14.415211  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:14.484937  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:14.484961  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:14.484976  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:14.568927  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:14.568962  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:17.105989  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:17.119732  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:17.119803  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:17.155999  213635 cri.go:89] found id: ""
	I0414 17:49:17.156027  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.156038  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:17.156046  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:17.156117  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:17.190158  213635 cri.go:89] found id: ""
	I0414 17:49:17.190180  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.190188  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:17.190193  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:17.190254  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:17.228075  213635 cri.go:89] found id: ""
	I0414 17:49:17.228116  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.228128  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:17.228135  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:17.228199  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:17.276284  213635 cri.go:89] found id: ""
	I0414 17:49:17.276311  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.276321  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:17.276328  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:17.276391  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:17.323644  213635 cri.go:89] found id: ""
	I0414 17:49:17.323672  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.323684  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:17.323691  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:17.323755  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:17.361870  213635 cri.go:89] found id: ""
	I0414 17:49:17.361898  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.361910  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:17.361917  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:17.361978  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:17.396346  213635 cri.go:89] found id: ""
	I0414 17:49:17.396371  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.396382  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:17.396389  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:17.396450  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:17.434395  213635 cri.go:89] found id: ""
	I0414 17:49:17.434425  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.434434  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:17.434445  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:17.434460  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:17.486946  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:17.486987  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:17.504167  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:17.504200  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:17.596627  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:17.596655  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:17.596671  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:17.688874  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:17.688911  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:15.285389  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:17.783942  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:20.238457  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:20.252780  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:20.252859  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:20.299511  213635 cri.go:89] found id: ""
	I0414 17:49:20.299535  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.299543  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:20.299549  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:20.299607  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:20.346458  213635 cri.go:89] found id: ""
	I0414 17:49:20.346484  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.346493  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:20.346500  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:20.346552  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:20.390657  213635 cri.go:89] found id: ""
	I0414 17:49:20.390677  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.390684  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:20.390689  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:20.390738  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:20.435444  213635 cri.go:89] found id: ""
	I0414 17:49:20.435468  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.435474  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:20.435480  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:20.435520  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:20.470010  213635 cri.go:89] found id: ""
	I0414 17:49:20.470030  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.470036  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:20.470044  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:20.470089  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:20.517097  213635 cri.go:89] found id: ""
	I0414 17:49:20.517130  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.517141  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:20.517149  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:20.517216  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:20.558688  213635 cri.go:89] found id: ""
	I0414 17:49:20.558717  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.558727  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:20.558733  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:20.558796  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:20.598644  213635 cri.go:89] found id: ""
	I0414 17:49:20.598679  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.598687  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:20.598695  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:20.598706  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:20.674514  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:20.674571  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:20.691779  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:20.691808  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:20.759608  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:20.759640  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:20.759652  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:20.852072  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:20.852104  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:23.435254  212269 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 17:49:23.435346  212269 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:49:23.435469  212269 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:49:23.435587  212269 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:49:23.435698  212269 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 17:49:23.435786  212269 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:49:23.437325  212269 out.go:235]   - Generating certificates and keys ...
	I0414 17:49:23.437460  212269 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:49:23.437553  212269 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:49:23.437665  212269 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 17:49:23.437786  212269 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 17:49:23.437914  212269 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 17:49:23.438026  212269 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 17:49:23.438157  212269 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 17:49:23.438253  212269 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 17:49:23.438370  212269 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 17:49:23.438493  212269 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 17:49:23.438556  212269 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 17:49:23.438629  212269 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:49:23.438700  212269 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:49:23.438783  212269 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 17:49:23.438855  212269 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:49:23.438939  212269 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:49:23.439013  212269 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:49:23.439123  212269 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:49:23.439213  212269 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:49:23.440637  212269 out.go:235]   - Booting up control plane ...
	I0414 17:49:23.440748  212269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:49:23.440847  212269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:49:23.440957  212269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:49:23.441124  212269 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:49:23.441250  212269 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:49:23.441317  212269 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:49:23.441508  212269 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 17:49:23.441668  212269 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 17:49:23.441883  212269 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001443308s
	I0414 17:49:23.442009  212269 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 17:49:23.442095  212269 kubeadm.go:310] [api-check] The API server is healthy after 5.001630109s
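Both readiness gates above are plain HTTP health endpoints polled by kubeadm; the kubelet one can be probed directly on the node (sketch):

	curl -s http://127.0.0.1:10248/healthz && echo   # prints "ok" once the kubelet is healthy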
	I0414 17:49:23.442250  212269 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 17:49:23.442407  212269 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 17:49:23.442500  212269 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 17:49:23.442809  212269 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-721806 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 17:49:23.442894  212269 kubeadm.go:310] [bootstrap-token] Using token: hi4egh.pplxy8fivi6fy4jt
	I0414 17:49:23.444130  212269 out.go:235]   - Configuring RBAC rules ...
	I0414 17:49:23.444269  212269 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 17:49:23.444373  212269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 17:49:23.444555  212269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 17:49:23.444724  212269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 17:49:23.444870  212269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 17:49:23.444983  212269 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 17:49:23.445140  212269 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 17:49:23.445205  212269 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 17:49:23.445269  212269 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 17:49:23.445279  212269 kubeadm.go:310] 
	I0414 17:49:23.445361  212269 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 17:49:23.445373  212269 kubeadm.go:310] 
	I0414 17:49:23.445471  212269 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 17:49:23.445483  212269 kubeadm.go:310] 
	I0414 17:49:23.445514  212269 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 17:49:23.445592  212269 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 17:49:23.445659  212269 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 17:49:23.445669  212269 kubeadm.go:310] 
	I0414 17:49:23.445746  212269 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 17:49:23.445756  212269 kubeadm.go:310] 
	I0414 17:49:23.445816  212269 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 17:49:23.445896  212269 kubeadm.go:310] 
	I0414 17:49:23.445976  212269 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 17:49:23.446046  212269 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 17:49:23.446113  212269 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 17:49:23.446122  212269 kubeadm.go:310] 
	I0414 17:49:23.446188  212269 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 17:49:23.446250  212269 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 17:49:23.446255  212269 kubeadm.go:310] 
	I0414 17:49:23.446323  212269 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hi4egh.pplxy8fivi6fy4jt \
	I0414 17:49:23.446414  212269 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d \
	I0414 17:49:23.446434  212269 kubeadm.go:310] 	--control-plane 
	I0414 17:49:23.446438  212269 kubeadm.go:310] 
	I0414 17:49:23.446507  212269 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 17:49:23.446513  212269 kubeadm.go:310] 
	I0414 17:49:23.446582  212269 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hi4egh.pplxy8fivi6fy4jt \
	I0414 17:49:23.446707  212269 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d 
	I0414 17:49:23.446730  212269 cni.go:84] Creating CNI manager for ""
	I0414 17:49:23.446739  212269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:49:23.448085  212269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 17:49:20.288087  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:22.783079  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:23.449087  212269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 17:49:23.461577  212269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
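The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration. The log does not include its contents; a conflist of this general shape, with every field value here an assumption for illustration only, looks like:

	cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF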
	I0414 17:49:23.480701  212269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 17:49:23.480761  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:23.480789  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-721806 minikube.k8s.io/updated_at=2025_04_14T17_49_23_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f1e69a1cd498979c80dbe968253c827f6eb2cf37 minikube.k8s.io/name=no-preload-721806 minikube.k8s.io/primary=true
	I0414 17:49:23.822239  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:23.822379  212269 ops.go:34] apiserver oom_adj: -16
	I0414 17:49:24.322913  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:24.822958  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:25.322967  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:25.823342  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:26.322688  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:26.822585  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:27.322370  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:27.823299  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:27.966937  212269 kubeadm.go:1113] duration metric: took 4.486233002s to wait for elevateKubeSystemPrivileges
	I0414 17:49:27.966971  212269 kubeadm.go:394] duration metric: took 5m39.576838178s to StartCluster
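The half-second cadence of the get sa default calls above is minikube polling for the default service account as part of the elevateKubeSystemPrivileges step timed here, after the cluster-admin binding was created at 17:49:23. As a shell loop the wait is roughly (sketch):

	until sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	done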
	I0414 17:49:27.966992  212269 settings.go:142] acquiring lock: {Name:mk0f1596f566b3225bf96154f374fff0641b21e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:49:27.967081  212269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:49:27.968121  212269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:49:27.968336  212269 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 17:49:27.968477  212269 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 17:49:27.968572  212269 config.go:182] Loaded profile config "no-preload-721806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:49:27.968640  212269 addons.go:69] Setting storage-provisioner=true in profile "no-preload-721806"
	I0414 17:49:27.968663  212269 addons.go:238] Setting addon storage-provisioner=true in "no-preload-721806"
	I0414 17:49:27.968667  212269 addons.go:69] Setting default-storageclass=true in profile "no-preload-721806"
	I0414 17:49:27.968685  212269 addons.go:69] Setting dashboard=true in profile "no-preload-721806"
	I0414 17:49:27.968689  212269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-721806"
	W0414 17:49:27.968693  212269 addons.go:247] addon storage-provisioner should already be in state true
	I0414 17:49:27.968698  212269 addons.go:69] Setting metrics-server=true in profile "no-preload-721806"
	I0414 17:49:27.968701  212269 addons.go:238] Setting addon dashboard=true in "no-preload-721806"
	W0414 17:49:27.968711  212269 addons.go:247] addon dashboard should already be in state true
	I0414 17:49:27.968713  212269 addons.go:238] Setting addon metrics-server=true in "no-preload-721806"
	W0414 17:49:27.968720  212269 addons.go:247] addon metrics-server should already be in state true
	I0414 17:49:27.968725  212269 host.go:66] Checking if "no-preload-721806" exists ...
	I0414 17:49:27.968737  212269 host.go:66] Checking if "no-preload-721806" exists ...
	I0414 17:49:27.968748  212269 host.go:66] Checking if "no-preload-721806" exists ...
	I0414 17:49:27.969136  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.969159  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.969174  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:27.969190  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:27.969136  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.969242  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:27.969294  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.969328  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
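
The libmachine lines here show the plugin model used by the kvm2 driver: each driver binary runs as a separate child process, serves RPC on an ephemeral localhost port ("Plugin server listening at address 127.0.0.1:..."), and the client then invokes methods such as GetVersion and GetMachineName over that connection. A minimal net/rpc sketch of the same pattern, purely illustrative since libmachine's actual wire protocol and method set differ:

    package main

    import (
        "fmt"
        "log"
        "net"
        "net/rpc"
    )

    // Driver stands in for a machine-driver plugin.
    type Driver struct{}

    // GetVersion mirrors the "Using API Version 1" exchange in the log.
    func (d *Driver) GetVersion(_ int, reply *int) error {
        *reply = 1
        return nil
    }

    func main() {
        rpc.Register(new(Driver))
        // Listen on an ephemeral localhost port, as the plugin servers above do.
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("Plugin server listening at address", ln.Addr())
        go rpc.Accept(ln)

        client, err := rpc.Dial("tcp", ln.Addr().String())
        if err != nil {
            log.Fatal(err)
        }
        var v int
        if err := client.Call("Driver.GetVersion", 0, &v); err != nil {
            log.Fatal(err)
        }
        fmt.Println("Using API Version", v)
    }
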
	I0414 17:49:27.969547  212269 out.go:177] * Verifying Kubernetes components...
	I0414 17:49:27.970928  212269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:49:27.985862  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41137
	I0414 17:49:27.985940  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35053
	I0414 17:49:27.986359  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:27.986478  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:27.986876  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:27.986894  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:27.987035  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:27.987050  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:27.987339  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:27.987522  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetState
	I0414 17:49:27.987561  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:27.988294  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.988321  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:27.988647  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39863
	I0414 17:49:27.989258  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:27.990683  212269 addons.go:238] Setting addon default-storageclass=true in "no-preload-721806"
	W0414 17:49:27.990703  212269 addons.go:247] addon default-storageclass should already be in state true
	I0414 17:49:27.990734  212269 host.go:66] Checking if "no-preload-721806" exists ...
	I0414 17:49:27.991093  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.991124  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:27.991371  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34825
	I0414 17:49:27.991468  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:27.991483  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:27.991880  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:27.992418  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.992453  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:27.992667  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:27.993166  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:27.993181  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:27.993592  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:27.994151  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.994179  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:28.006693  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34701
	I0414 17:49:28.006725  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45783
	I0414 17:49:28.007104  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:28.007150  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:28.007487  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:28.007500  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:28.007611  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:28.007630  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:28.007860  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:28.008020  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetState
	I0414 17:49:28.008067  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:28.008548  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:28.008586  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:28.010355  212269 main.go:141] libmachine: (no-preload-721806) Calling .DriverName
	I0414 17:49:28.011939  212269 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0414 17:49:28.012527  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
	I0414 17:49:28.013128  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:28.013676  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:28.013704  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:28.013896  212269 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0414 17:49:28.014150  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:28.014326  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetState
	I0414 17:49:28.014618  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0414 17:49:28.014827  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0414 17:49:28.014838  212269 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0414 17:49:28.014860  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHHostname
	I0414 17:49:28.015140  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:28.015587  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:28.015603  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:28.016012  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:28.016211  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetState
	I0414 17:49:28.016728  212269 main.go:141] libmachine: (no-preload-721806) Calling .DriverName
	I0414 17:49:28.018254  212269 main.go:141] libmachine: (no-preload-721806) Calling .DriverName
	I0414 17:49:28.018509  212269 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0414 17:49:28.018914  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.019375  212269 main.go:141] libmachine: (no-preload-721806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:f0:13", ip: ""} in network mk-no-preload-721806: {Iface:virbr1 ExpiryTime:2025-04-14 18:43:22 +0000 UTC Type:0 Mac:52:54:00:96:f0:13 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:no-preload-721806 Clientid:01:52:54:00:96:f0:13}
	I0414 17:49:28.019390  212269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:49:23.392749  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:23.409465  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:23.409526  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:23.449515  213635 cri.go:89] found id: ""
	I0414 17:49:23.449542  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.449552  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:23.449559  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:23.449609  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:23.490201  213635 cri.go:89] found id: ""
	I0414 17:49:23.490225  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.490234  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:23.490242  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:23.490294  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:23.528644  213635 cri.go:89] found id: ""
	I0414 17:49:23.528673  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.528684  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:23.528692  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:23.528755  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:23.572217  213635 cri.go:89] found id: ""
	I0414 17:49:23.572245  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.572256  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:23.572263  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:23.572319  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:23.612901  213635 cri.go:89] found id: ""
	I0414 17:49:23.612922  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.612930  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:23.612936  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:23.612981  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:23.668230  213635 cri.go:89] found id: ""
	I0414 17:49:23.668256  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.668265  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:23.668271  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:23.668322  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:23.714238  213635 cri.go:89] found id: ""
	I0414 17:49:23.714265  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.714275  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:23.714282  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:23.714331  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:23.763817  213635 cri.go:89] found id: ""
	I0414 17:49:23.763853  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.763863  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:23.763872  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:23.763884  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:23.836441  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:23.836486  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:23.861896  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:23.861940  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:23.944757  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
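
Every log-gathering pass for this v1.20.0 cluster hits the same wall: nothing is listening on localhost:8443, so kubectl's connection is refused before any API call can be made. A plain TCP probe distinguishes "connection refused" (no listener on the port) from a hang (firewall or routing problem); a minimal sketch against the same endpoint:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the same endpoint kubectl is trying to reach.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // "connection refused" means nothing is listening on the port;
            // a timeout would instead suggest a firewall or routing problem.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }
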
	I0414 17:49:23.944787  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:23.944806  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:24.029884  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:24.029923  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:26.571950  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:26.585122  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:26.585180  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:26.623368  213635 cri.go:89] found id: ""
	I0414 17:49:26.623392  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.623401  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:26.623409  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:26.623463  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:26.657588  213635 cri.go:89] found id: ""
	I0414 17:49:26.657624  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.657635  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:26.657642  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:26.657699  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:26.690827  213635 cri.go:89] found id: ""
	I0414 17:49:26.690854  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.690862  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:26.690867  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:26.690916  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:26.732830  213635 cri.go:89] found id: ""
	I0414 17:49:26.732866  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.732876  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:26.732883  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:26.732946  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:26.767719  213635 cri.go:89] found id: ""
	I0414 17:49:26.767770  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.767783  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:26.767793  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:26.767861  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:26.805504  213635 cri.go:89] found id: ""
	I0414 17:49:26.805531  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.805540  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:26.805547  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:26.805607  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:26.848736  213635 cri.go:89] found id: ""
	I0414 17:49:26.848761  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.848769  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:26.848774  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:26.848831  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:26.888964  213635 cri.go:89] found id: ""
	I0414 17:49:26.888996  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.889006  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:26.889017  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:26.889030  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:26.902789  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:26.902819  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:26.984479  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:26.984503  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:26.984516  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:27.072453  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:27.072491  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:27.114247  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:27.114282  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:25.282623  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:27.781278  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:28.019381  212269 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0414 17:49:28.019465  212269 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0414 17:49:28.019483  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHHostname
	I0414 17:49:28.019407  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined IP address 192.168.39.89 and MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.019634  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHPort
	I0414 17:49:28.019797  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHKeyPath
	I0414 17:49:28.019918  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHUsername
	I0414 17:49:28.020024  212269 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806/id_rsa Username:docker}
	I0414 17:49:28.020513  212269 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:49:28.020530  212269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 17:49:28.020546  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHHostname
	I0414 17:49:28.024119  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.024370  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.024926  212269 main.go:141] libmachine: (no-preload-721806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:f0:13", ip: ""} in network mk-no-preload-721806: {Iface:virbr1 ExpiryTime:2025-04-14 18:43:22 +0000 UTC Type:0 Mac:52:54:00:96:f0:13 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:no-preload-721806 Clientid:01:52:54:00:96:f0:13}
	I0414 17:49:28.024940  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHPort
	I0414 17:49:28.024945  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined IP address 192.168.39.89 and MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.025142  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHKeyPath
	I0414 17:49:28.025307  212269 main.go:141] libmachine: (no-preload-721806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:f0:13", ip: ""} in network mk-no-preload-721806: {Iface:virbr1 ExpiryTime:2025-04-14 18:43:22 +0000 UTC Type:0 Mac:52:54:00:96:f0:13 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:no-preload-721806 Clientid:01:52:54:00:96:f0:13}
	I0414 17:49:28.025318  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined IP address 192.168.39.89 and MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.025337  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHUsername
	I0414 17:49:28.025447  212269 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806/id_rsa Username:docker}
	I0414 17:49:28.025773  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHPort
	I0414 17:49:28.025953  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHKeyPath
	I0414 17:49:28.026140  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHUsername
	I0414 17:49:28.026298  212269 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806/id_rsa Username:docker}
	I0414 17:49:28.028168  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33317
	I0414 17:49:28.028575  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:28.028954  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:28.028977  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:28.029414  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:28.029592  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetState
	I0414 17:49:28.031192  212269 main.go:141] libmachine: (no-preload-721806) Calling .DriverName
	I0414 17:49:28.031456  212269 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 17:49:28.031470  212269 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 17:49:28.031486  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHHostname
	I0414 17:49:28.034539  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.034997  212269 main.go:141] libmachine: (no-preload-721806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:f0:13", ip: ""} in network mk-no-preload-721806: {Iface:virbr1 ExpiryTime:2025-04-14 18:43:22 +0000 UTC Type:0 Mac:52:54:00:96:f0:13 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:no-preload-721806 Clientid:01:52:54:00:96:f0:13}
	I0414 17:49:28.035014  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined IP address 192.168.39.89 and MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.035149  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHPort
	I0414 17:49:28.035305  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHKeyPath
	I0414 17:49:28.035463  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHUsername
	I0414 17:49:28.035588  212269 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806/id_rsa Username:docker}
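
Each sshutil line above constructs an SSH client from the same four fields: the node IP, port 22, a per-machine private key, and the docker user. A minimal golang.org/x/crypto/ssh sketch built from those fields; host-key checking is skipped here, which is tolerable only for throwaway test VMs, and the final command is an illustrative example, not one taken from this log:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path and connection fields copied from the sshutil log line above.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs only
        }
        client, err := ssh.Dial("tcp", "192.168.39.89:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        out, err := session.Output("sudo systemctl is-active kubelet")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("kubelet: %s", out)
    }
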
	I0414 17:49:28.215025  212269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:49:28.277431  212269 node_ready.go:35] waiting up to 6m0s for node "no-preload-721806" to be "Ready" ...
	I0414 17:49:28.311336  212269 node_ready.go:49] node "no-preload-721806" has status "Ready":"True"
	I0414 17:49:28.311360  212269 node_ready.go:38] duration metric: took 33.901113ms for node "no-preload-721806" to be "Ready" ...
	I0414 17:49:28.311374  212269 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...

	I0414 17:49:28.317467  212269 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-6cjwn" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:28.374855  212269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 17:49:28.390490  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0414 17:49:28.390513  212269 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0414 17:49:28.406595  212269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:49:28.437361  212269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0414 17:49:28.437392  212269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0414 17:49:28.469744  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0414 17:49:28.469782  212269 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0414 17:49:28.521154  212269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0414 17:49:28.521179  212269 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0414 17:49:28.548853  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0414 17:49:28.548878  212269 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0414 17:49:28.614511  212269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 17:49:28.614541  212269 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0414 17:49:28.649638  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0414 17:49:28.649661  212269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0414 17:49:28.703339  212269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 17:49:28.777954  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0414 17:49:28.777987  212269 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0414 17:49:28.845025  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:28.845054  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:28.845362  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:28.845380  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:28.845392  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:28.845399  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:28.845652  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:28.845672  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:28.858160  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:28.858179  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:28.858491  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:28.858514  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:28.858515  212269 main.go:141] libmachine: (no-preload-721806) DBG | Closing plugin on server side
	I0414 17:49:28.893505  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0414 17:49:28.893539  212269 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0414 17:49:28.960993  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0414 17:49:28.961020  212269 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0414 17:49:29.067780  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0414 17:49:29.067815  212269 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0414 17:49:29.129670  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0414 17:49:29.129698  212269 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0414 17:49:29.201772  212269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0414 17:49:29.598669  212269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.192034026s)
	I0414 17:49:29.598739  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:29.598752  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:29.599101  212269 main.go:141] libmachine: (no-preload-721806) DBG | Closing plugin on server side
	I0414 17:49:29.599101  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:29.599154  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:29.599177  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:29.599191  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:29.599468  212269 main.go:141] libmachine: (no-preload-721806) DBG | Closing plugin on server side
	I0414 17:49:29.599477  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:29.599505  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:30.044475  212269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.341048776s)
	I0414 17:49:30.044551  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:30.044569  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:30.044858  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:30.044874  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:30.044884  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:30.044891  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:30.045277  212269 main.go:141] libmachine: (no-preload-721806) DBG | Closing plugin on server side
	I0414 17:49:30.045289  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:30.045341  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:30.045355  212269 addons.go:479] Verifying addon metrics-server=true in "no-preload-721806"
	I0414 17:49:30.329870  212269 pod_ready.go:103] pod "coredns-668d6bf9bc-6cjwn" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:31.062251  212269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.860435662s)
	I0414 17:49:31.062298  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:31.062312  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:31.062629  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:31.062652  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:31.062662  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:31.062670  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:31.062906  212269 main.go:141] libmachine: (no-preload-721806) DBG | Closing plugin on server side
	I0414 17:49:31.062951  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:31.062964  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:31.064362  212269 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-721806 addons enable metrics-server
	
	I0414 17:49:31.065558  212269 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
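
The addon flow above is uniform: each manifest is scp'd into /etc/kubernetes/addons on the node, then the whole group is applied in one batch with the bundled kubectl and the cluster's kubeconfig. A hedged sketch of that apply step; the paths are copied from the log, while the helper itself is illustrative rather than minikube's actual code:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyManifests mirrors the "kubectl apply -f ..." invocations in the log:
    // the kubeconfig is passed via the environment and each manifest is a -f flag.
    func applyManifests(kubectl, kubeconfig string, manifests ...string) error {
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command(kubectl, args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        err := applyManifests(
            "/var/lib/minikube/binaries/v1.32.2/kubectl",
            "/var/lib/minikube/kubeconfig",
            "/etc/kubernetes/addons/storage-provisioner.yaml",
        )
        if err != nil {
            fmt.Println(err)
        }
    }
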
	I0414 17:49:29.668064  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:29.685205  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:29.685289  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:29.729725  213635 cri.go:89] found id: ""
	I0414 17:49:29.729753  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.729760  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:29.729766  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:29.729823  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:29.788536  213635 cri.go:89] found id: ""
	I0414 17:49:29.788569  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.788581  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:29.788588  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:29.788656  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:29.832032  213635 cri.go:89] found id: ""
	I0414 17:49:29.832060  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.832069  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:29.832074  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:29.832123  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:29.864981  213635 cri.go:89] found id: ""
	I0414 17:49:29.865009  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.865019  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:29.865025  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:29.865091  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:29.901024  213635 cri.go:89] found id: ""
	I0414 17:49:29.901060  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.901071  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:29.901079  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:29.901149  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:29.938790  213635 cri.go:89] found id: ""
	I0414 17:49:29.938820  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.938832  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:29.938840  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:29.938912  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:29.981414  213635 cri.go:89] found id: ""
	I0414 17:49:29.981445  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.981456  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:29.981463  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:29.981526  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:30.022510  213635 cri.go:89] found id: ""
	I0414 17:49:30.022545  213635 logs.go:282] 0 containers: []
	W0414 17:49:30.022558  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:30.022571  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:30.022588  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:30.077221  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:30.077255  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:30.091513  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:30.091552  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:30.164964  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:30.164991  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:30.165004  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:30.246281  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:30.246321  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:32.807018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:32.825456  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:32.825531  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:32.864079  213635 cri.go:89] found id: ""
	I0414 17:49:32.864116  213635 logs.go:282] 0 containers: []
	W0414 17:49:32.864126  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:32.864133  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:32.864191  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:32.905763  213635 cri.go:89] found id: ""
	I0414 17:49:32.905792  213635 logs.go:282] 0 containers: []
	W0414 17:49:32.905806  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:32.905813  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:32.905894  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:32.944126  213635 cri.go:89] found id: ""
	I0414 17:49:32.944167  213635 logs.go:282] 0 containers: []
	W0414 17:49:32.944186  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:32.944195  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:32.944258  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:32.983511  213635 cri.go:89] found id: ""
	I0414 17:49:32.983549  213635 logs.go:282] 0 containers: []
	W0414 17:49:32.983562  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:32.983571  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:32.983629  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:33.021383  213635 cri.go:89] found id: ""
	I0414 17:49:33.021411  213635 logs.go:282] 0 containers: []
	W0414 17:49:33.021422  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:33.021429  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:33.021488  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:33.058181  213635 cri.go:89] found id: ""
	I0414 17:49:33.058214  213635 logs.go:282] 0 containers: []
	W0414 17:49:33.058225  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:33.058233  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:33.058296  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:33.094426  213635 cri.go:89] found id: ""
	I0414 17:49:33.094459  213635 logs.go:282] 0 containers: []
	W0414 17:49:33.094470  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:33.094479  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:33.094537  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:33.139392  213635 cri.go:89] found id: ""
	I0414 17:49:33.139430  213635 logs.go:282] 0 containers: []
	W0414 17:49:33.139443  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:33.139455  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:33.139471  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:33.218814  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:33.218842  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:33.218860  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:29.783892  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:32.282499  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:31.066728  212269 addons.go:514] duration metric: took 3.098264633s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0414 17:49:32.824809  212269 pod_ready.go:103] pod "coredns-668d6bf9bc-6cjwn" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:35.323008  212269 pod_ready.go:103] pod "coredns-668d6bf9bc-6cjwn" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:33.325637  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:33.325678  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:33.363443  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:33.363473  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:33.427131  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:33.427167  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:35.942712  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:35.957936  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:35.958027  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:35.998316  213635 cri.go:89] found id: ""
	I0414 17:49:35.998343  213635 logs.go:282] 0 containers: []
	W0414 17:49:35.998354  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:35.998361  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:35.998419  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:36.032107  213635 cri.go:89] found id: ""
	I0414 17:49:36.032139  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.032149  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:36.032156  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:36.032211  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:36.070010  213635 cri.go:89] found id: ""
	I0414 17:49:36.070035  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.070043  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:36.070049  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:36.070104  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:36.105914  213635 cri.go:89] found id: ""
	I0414 17:49:36.105944  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.105962  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:36.105970  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:36.106036  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:36.140378  213635 cri.go:89] found id: ""
	I0414 17:49:36.140406  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.140418  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:36.140425  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:36.140487  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:36.178535  213635 cri.go:89] found id: ""
	I0414 17:49:36.178564  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.178575  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:36.178583  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:36.178652  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:36.217284  213635 cri.go:89] found id: ""
	I0414 17:49:36.217314  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.217324  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:36.217330  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:36.217391  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:36.251770  213635 cri.go:89] found id: ""
	I0414 17:49:36.251805  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.251818  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:36.251835  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:36.251850  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:36.322858  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:36.322906  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:36.337902  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:36.337939  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:36.415729  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:36.415752  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:36.415767  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:36.512960  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:36.513000  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
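
The "container status" pass uses a shell fallback: run crictl if it resolves on PATH, otherwise fall back to docker. The same fallback can be driven from Go by handing the compound command to bash, with the log's backtick substitution written as $(...); the wrapper itself is an illustrative sketch:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // Same fallback the log shows: prefer crictl, fall back to docker.
        script := `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s", out)
    }
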
	I0414 17:49:36.827356  212269 pod_ready.go:93] pod "coredns-668d6bf9bc-6cjwn" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:36.827377  212269 pod_ready.go:82] duration metric: took 8.509888872s for pod "coredns-668d6bf9bc-6cjwn" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.827386  212269 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-bng87" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.869474  212269 pod_ready.go:93] pod "coredns-668d6bf9bc-bng87" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:36.869506  212269 pod_ready.go:82] duration metric: took 42.1117ms for pod "coredns-668d6bf9bc-bng87" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.869522  212269 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.896002  212269 pod_ready.go:93] pod "etcd-no-preload-721806" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:36.896034  212269 pod_ready.go:82] duration metric: took 26.503053ms for pod "etcd-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.896046  212269 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.910284  212269 pod_ready.go:93] pod "kube-apiserver-no-preload-721806" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:36.910332  212269 pod_ready.go:82] duration metric: took 14.277535ms for pod "kube-apiserver-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.910360  212269 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.917658  212269 pod_ready.go:93] pod "kube-controller-manager-no-preload-721806" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:36.917678  212269 pod_ready.go:82] duration metric: took 7.305319ms for pod "kube-controller-manager-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.917689  212269 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tktgt" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:37.227025  212269 pod_ready.go:93] pod "kube-proxy-tktgt" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:37.227047  212269 pod_ready.go:82] duration metric: took 309.350302ms for pod "kube-proxy-tktgt" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:37.227056  212269 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:37.621871  212269 pod_ready.go:93] pod "kube-scheduler-no-preload-721806" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:37.621901  212269 pod_ready.go:82] duration metric: took 394.836681ms for pod "kube-scheduler-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:37.621909  212269 pod_ready.go:39] duration metric: took 9.310525251s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
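
Each pod_ready check above amounts to reading the pod and testing its PodReady condition. A minimal client-go sketch of the same test, assuming the host kubeconfig path from the log and one of the pod names above:

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20349-149500/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-tktgt", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        // A pod is "Ready" when its PodReady condition is True.
        ready := false
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                ready = c.Status == corev1.ConditionTrue
            }
        }
        fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
    }
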
	I0414 17:49:37.621924  212269 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:49:37.621974  212269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:37.660143  212269 api_server.go:72] duration metric: took 9.691771257s to wait for apiserver process to appear ...
	I0414 17:49:37.660171  212269 api_server.go:88] waiting for apiserver healthz status ...
	I0414 17:49:37.660193  212269 api_server.go:253] Checking apiserver healthz at https://192.168.39.89:8443/healthz ...
	I0414 17:49:37.665313  212269 api_server.go:279] https://192.168.39.89:8443/healthz returned 200:
	ok
	I0414 17:49:37.666371  212269 api_server.go:141] control plane version: v1.32.2
	I0414 17:49:37.666390  212269 api_server.go:131] duration metric: took 6.212109ms to wait for apiserver health ...
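
The healthz wait is a plain HTTPS GET against the apiserver that succeeds once it returns 200 with body "ok". A standalone probe has to cope with the cluster's self-signed CA; the sketch below simply skips certificate verification, which is acceptable only for a disposable test probe:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver cert is signed by minikube's own CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.39.89:8443/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
    }
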
	I0414 17:49:37.666397  212269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 17:49:37.823477  212269 system_pods.go:59] 9 kube-system pods found
	I0414 17:49:37.823504  212269 system_pods.go:61] "coredns-668d6bf9bc-6cjwn" [3fb5680f-8bc6-4d35-abbf-19108c2242d3] Running
	I0414 17:49:37.823509  212269 system_pods.go:61] "coredns-668d6bf9bc-bng87" [0ae7cd1a-9760-43aa-b0ac-9f66c7e505d2] Running
	I0414 17:49:37.823513  212269 system_pods.go:61] "etcd-no-preload-721806" [6f30ffea-8f3a-4e21-9fd6-c9366bb997e2] Running
	I0414 17:49:37.823516  212269 system_pods.go:61] "kube-apiserver-no-preload-721806" [bc7d4172-ee21-4d53-a4a6-9bb7272d8b24] Running
	I0414 17:49:37.823521  212269 system_pods.go:61] "kube-controller-manager-no-preload-721806" [346266a0-a376-466c-9ebb-46772557740b] Running
	I0414 17:49:37.823525  212269 system_pods.go:61] "kube-proxy-tktgt" [984a1b9b-3c51-45d0-86bd-3ca64d1b3af8] Running
	I0414 17:49:37.823529  212269 system_pods.go:61] "kube-scheduler-no-preload-721806" [2294ad27-ffc4-4181-9bef-f865956252ac] Running
	I0414 17:49:37.823537  212269 system_pods.go:61] "metrics-server-f79f97bbb-f99gx" [c2d0b638-6f0e-41d7-b4e3-4e0f5a619c86] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:49:37.823547  212269 system_pods.go:61] "storage-provisioner" [463e19f1-b7aa-46ff-b5c7-99e1207bff9e] Running
	I0414 17:49:37.823561  212269 system_pods.go:74] duration metric: took 157.157807ms to wait for pod list to return data ...
	I0414 17:49:37.823571  212269 default_sa.go:34] waiting for default service account to be created ...
	I0414 17:49:38.021598  212269 default_sa.go:45] found service account: "default"
	I0414 17:49:38.021626  212269 default_sa.go:55] duration metric: took 198.045961ms for default service account to be created ...
	I0414 17:49:38.021642  212269 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 17:49:38.222171  212269 system_pods.go:86] 9 kube-system pods found
	I0414 17:49:38.222205  212269 system_pods.go:89] "coredns-668d6bf9bc-6cjwn" [3fb5680f-8bc6-4d35-abbf-19108c2242d3] Running
	I0414 17:49:38.222210  212269 system_pods.go:89] "coredns-668d6bf9bc-bng87" [0ae7cd1a-9760-43aa-b0ac-9f66c7e505d2] Running
	I0414 17:49:38.222214  212269 system_pods.go:89] "etcd-no-preload-721806" [6f30ffea-8f3a-4e21-9fd6-c9366bb997e2] Running
	I0414 17:49:38.222217  212269 system_pods.go:89] "kube-apiserver-no-preload-721806" [bc7d4172-ee21-4d53-a4a6-9bb7272d8b24] Running
	I0414 17:49:38.222220  212269 system_pods.go:89] "kube-controller-manager-no-preload-721806" [346266a0-a376-466c-9ebb-46772557740b] Running
	I0414 17:49:38.222224  212269 system_pods.go:89] "kube-proxy-tktgt" [984a1b9b-3c51-45d0-86bd-3ca64d1b3af8] Running
	I0414 17:49:38.222228  212269 system_pods.go:89] "kube-scheduler-no-preload-721806" [2294ad27-ffc4-4181-9bef-f865956252ac] Running
	I0414 17:49:38.222233  212269 system_pods.go:89] "metrics-server-f79f97bbb-f99gx" [c2d0b638-6f0e-41d7-b4e3-4e0f5a619c86] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:49:38.222237  212269 system_pods.go:89] "storage-provisioner" [463e19f1-b7aa-46ff-b5c7-99e1207bff9e] Running
	I0414 17:49:38.222247  212269 system_pods.go:126] duration metric: took 200.597392ms to wait for k8s-apps to be running ...
	I0414 17:49:38.222257  212269 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 17:49:38.222316  212269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:49:38.258014  212269 system_svc.go:56] duration metric: took 35.747059ms WaitForService to wait for kubelet
	I0414 17:49:38.258046  212269 kubeadm.go:582] duration metric: took 10.289680192s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 17:49:38.258069  212269 node_conditions.go:102] verifying NodePressure condition ...
	I0414 17:49:38.422770  212269 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 17:49:38.422805  212269 node_conditions.go:123] node cpu capacity is 2
	I0414 17:49:38.422833  212269 node_conditions.go:105] duration metric: took 164.757743ms to run NodePressure ...
	I0414 17:49:38.422848  212269 start.go:241] waiting for startup goroutines ...
	I0414 17:49:38.422858  212269 start.go:246] waiting for cluster config update ...
	I0414 17:49:38.422873  212269 start.go:255] writing updated cluster config ...
	I0414 17:49:38.423253  212269 ssh_runner.go:195] Run: rm -f paused
	I0414 17:49:38.493521  212269 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 17:49:38.495382  212269 out.go:177] * Done! kubectl is now configured to use "no-preload-721806" cluster and "default" namespace by default
	I0414 17:49:34.781757  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:36.781990  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:39.053905  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:39.068768  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:39.068841  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:39.104418  213635 cri.go:89] found id: ""
	I0414 17:49:39.104446  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.104454  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:39.104460  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:39.104520  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:39.144556  213635 cri.go:89] found id: ""
	I0414 17:49:39.144587  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.144598  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:39.144605  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:39.144673  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:39.184890  213635 cri.go:89] found id: ""
	I0414 17:49:39.184923  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.184936  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:39.184946  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:39.185018  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:39.224321  213635 cri.go:89] found id: ""
	I0414 17:49:39.224353  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.224364  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:39.224372  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:39.224431  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:39.275363  213635 cri.go:89] found id: ""
	I0414 17:49:39.275393  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.275403  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:39.275411  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:39.275469  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:39.324682  213635 cri.go:89] found id: ""
	I0414 17:49:39.324715  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.324725  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:39.324733  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:39.324788  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:39.356862  213635 cri.go:89] found id: ""
	I0414 17:49:39.356891  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.356901  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:39.356908  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:39.356970  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:39.392157  213635 cri.go:89] found id: ""
	I0414 17:49:39.392186  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.392197  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:39.392208  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:39.392223  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:39.484945  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:39.484971  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:39.484989  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:39.564891  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:39.564927  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:39.608513  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:39.608543  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:39.672726  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:39.672760  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:42.189948  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:42.203489  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:42.203560  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:42.243021  213635 cri.go:89] found id: ""
	I0414 17:49:42.243047  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.243057  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:42.243064  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:42.243152  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:42.285782  213635 cri.go:89] found id: ""
	I0414 17:49:42.285807  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.285817  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:42.285824  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:42.285898  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:42.318326  213635 cri.go:89] found id: ""
	I0414 17:49:42.318350  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.318360  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:42.318367  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:42.318421  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:42.351765  213635 cri.go:89] found id: ""
	I0414 17:49:42.351788  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.351795  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:42.351802  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:42.351862  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:42.382539  213635 cri.go:89] found id: ""
	I0414 17:49:42.382564  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.382574  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:42.382582  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:42.382639  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:42.416009  213635 cri.go:89] found id: ""
	I0414 17:49:42.416034  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.416044  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:42.416051  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:42.416107  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:42.447820  213635 cri.go:89] found id: ""
	I0414 17:49:42.447860  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.447871  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:42.447879  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:42.447941  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:42.486157  213635 cri.go:89] found id: ""
	I0414 17:49:42.486179  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.486186  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:42.486195  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:42.486210  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:42.556937  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:42.556963  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:42.556980  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:42.636537  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:42.636569  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:42.676688  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:42.676717  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:42.728391  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:42.728421  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:38.783981  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:41.281841  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:43.282020  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:45.242452  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:45.256486  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:45.256558  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:45.291454  213635 cri.go:89] found id: ""
	I0414 17:49:45.291482  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.291490  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:45.291497  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:45.291552  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:45.328550  213635 cri.go:89] found id: ""
	I0414 17:49:45.328573  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.328583  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:45.328591  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:45.328638  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:45.365121  213635 cri.go:89] found id: ""
	I0414 17:49:45.365148  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.365155  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:45.365161  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:45.365216  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:45.402479  213635 cri.go:89] found id: ""
	I0414 17:49:45.402508  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.402519  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:45.402527  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:45.402580  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:45.433123  213635 cri.go:89] found id: ""
	I0414 17:49:45.433147  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.433155  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:45.433160  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:45.433206  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:45.466351  213635 cri.go:89] found id: ""
	I0414 17:49:45.466376  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.466383  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:45.466390  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:45.466442  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:45.498745  213635 cri.go:89] found id: ""
	I0414 17:49:45.498774  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.498785  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:45.498792  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:45.498866  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:45.531870  213635 cri.go:89] found id: ""
	I0414 17:49:45.531898  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.531908  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:45.531919  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:45.531937  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:45.582230  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:45.582257  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:45.597164  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:45.597197  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:45.666569  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:45.666598  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:45.666616  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:45.746036  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:45.746068  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:45.782620  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:48.280928  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:48.284590  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:48.297947  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:48.298019  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:48.331443  213635 cri.go:89] found id: ""
	I0414 17:49:48.331469  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.331480  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:48.331487  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:48.331534  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:48.364569  213635 cri.go:89] found id: ""
	I0414 17:49:48.364602  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.364613  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:48.364620  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:48.364683  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:48.398063  213635 cri.go:89] found id: ""
	I0414 17:49:48.398097  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.398109  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:48.398118  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:48.398182  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:48.430783  213635 cri.go:89] found id: ""
	I0414 17:49:48.430808  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.430829  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:48.430837  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:48.430924  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:48.466378  213635 cri.go:89] found id: ""
	I0414 17:49:48.466410  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.466423  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:48.466432  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:48.466656  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:48.499766  213635 cri.go:89] found id: ""
	I0414 17:49:48.499819  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.499829  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:48.499837  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:48.499901  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:48.533192  213635 cri.go:89] found id: ""
	I0414 17:49:48.533218  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.533228  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:48.533235  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:48.533294  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:48.565138  213635 cri.go:89] found id: ""
	I0414 17:49:48.565159  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.565167  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:48.565174  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:48.565183  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:48.616578  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:48.616609  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:48.630209  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:48.630232  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:48.697158  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:48.697184  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:48.697196  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:48.777141  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:48.777177  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:51.322807  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:51.336971  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:51.337037  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:51.373592  213635 cri.go:89] found id: ""
	I0414 17:49:51.373616  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.373623  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:51.373628  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:51.373675  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:51.410753  213635 cri.go:89] found id: ""
	I0414 17:49:51.410782  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.410791  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:51.410796  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:51.410846  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:51.443612  213635 cri.go:89] found id: ""
	I0414 17:49:51.443639  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.443650  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:51.443656  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:51.443717  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:51.476956  213635 cri.go:89] found id: ""
	I0414 17:49:51.476982  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.476990  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:51.476995  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:51.477041  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:51.512295  213635 cri.go:89] found id: ""
	I0414 17:49:51.512330  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.512349  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:51.512357  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:51.512420  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:51.553410  213635 cri.go:89] found id: ""
	I0414 17:49:51.553437  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.553445  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:51.553451  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:51.553514  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:51.593165  213635 cri.go:89] found id: ""
	I0414 17:49:51.593196  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.593205  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:51.593210  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:51.593259  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:51.634382  213635 cri.go:89] found id: ""
	I0414 17:49:51.634425  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.634436  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:51.634446  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:51.634457  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:51.687688  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:51.687725  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:51.703569  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:51.703600  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:51.775371  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:51.775398  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:51.775414  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:51.851890  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:51.851936  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:50.282042  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:52.782200  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:54.389539  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:54.403233  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:54.403293  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:54.447655  213635 cri.go:89] found id: ""
	I0414 17:49:54.447675  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.447683  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:54.447690  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:54.447736  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:54.486882  213635 cri.go:89] found id: ""
	I0414 17:49:54.486905  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.486912  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:54.486917  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:54.486977  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:54.519544  213635 cri.go:89] found id: ""
	I0414 17:49:54.519570  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.519581  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:54.519588  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:54.519643  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:54.558646  213635 cri.go:89] found id: ""
	I0414 17:49:54.558671  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.558681  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:54.558689  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:54.558735  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:54.600650  213635 cri.go:89] found id: ""
	I0414 17:49:54.600674  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.600680  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:54.600685  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:54.600732  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:54.641206  213635 cri.go:89] found id: ""
	I0414 17:49:54.641231  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.641240  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:54.641247  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:54.641302  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:54.680671  213635 cri.go:89] found id: ""
	I0414 17:49:54.680698  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.680708  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:54.680715  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:54.680765  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:54.721028  213635 cri.go:89] found id: ""
	I0414 17:49:54.721050  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.721056  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:54.721066  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:54.721076  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:54.769755  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:54.769782  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:54.785252  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:54.785273  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:54.855288  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:54.855308  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:54.855322  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:54.952695  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:54.952735  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:57.499933  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:57.514593  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:57.514658  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:57.549526  213635 cri.go:89] found id: ""
	I0414 17:49:57.549550  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.549558  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:57.549564  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:57.549610  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:57.582596  213635 cri.go:89] found id: ""
	I0414 17:49:57.582626  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.582637  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:57.582643  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:57.582695  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:57.622214  213635 cri.go:89] found id: ""
	I0414 17:49:57.622244  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.622252  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:57.622257  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:57.622313  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:57.655388  213635 cri.go:89] found id: ""
	I0414 17:49:57.655415  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.655422  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:57.655428  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:57.655474  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:57.692324  213635 cri.go:89] found id: ""
	I0414 17:49:57.692349  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.692357  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:57.692362  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:57.692407  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:57.725614  213635 cri.go:89] found id: ""
	I0414 17:49:57.725637  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.725644  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:57.725650  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:57.725700  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:57.757747  213635 cri.go:89] found id: ""
	I0414 17:49:57.757779  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.757788  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:57.757794  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:57.757868  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:57.791614  213635 cri.go:89] found id: ""
	I0414 17:49:57.791651  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.791658  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:57.791666  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:57.791676  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:57.839950  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:57.839983  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:57.852850  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:57.852877  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:57.925310  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:57.925338  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:57.925355  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:58.008445  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:58.008484  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:54.783081  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:57.282711  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:00.550402  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:50:00.564239  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:50:00.564296  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:50:00.598410  213635 cri.go:89] found id: ""
	I0414 17:50:00.598439  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.598447  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:50:00.598452  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:50:00.598500  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:50:00.629470  213635 cri.go:89] found id: ""
	I0414 17:50:00.629489  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.629497  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:50:00.629502  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:50:00.629547  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:50:00.660663  213635 cri.go:89] found id: ""
	I0414 17:50:00.660686  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.660695  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:50:00.660703  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:50:00.660780  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:50:00.703422  213635 cri.go:89] found id: ""
	I0414 17:50:00.703450  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.703461  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:50:00.703467  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:50:00.703524  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:50:00.736355  213635 cri.go:89] found id: ""
	I0414 17:50:00.736378  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.736388  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:50:00.736394  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:50:00.736447  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:50:00.771432  213635 cri.go:89] found id: ""
	I0414 17:50:00.771460  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.771470  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:50:00.771478  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:50:00.771544  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:50:00.804453  213635 cri.go:89] found id: ""
	I0414 17:50:00.804474  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.804483  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:50:00.804490  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:50:00.804550  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:50:00.840934  213635 cri.go:89] found id: ""
	I0414 17:50:00.840962  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.840971  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:50:00.840982  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:50:00.840994  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:50:00.888813  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:50:00.888846  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:50:00.901168  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:50:00.901188  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:50:00.970608  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:50:00.970638  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:50:00.970655  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:50:01.054190  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:50:01.054225  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:59.781167  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:01.783383  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:03.592930  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:50:03.607476  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:50:03.607542  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:50:03.647536  213635 cri.go:89] found id: ""
	I0414 17:50:03.647559  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.647567  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:50:03.647572  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:50:03.647616  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:50:03.687053  213635 cri.go:89] found id: ""
	I0414 17:50:03.687078  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.687086  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:50:03.687092  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:50:03.687135  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:50:03.724232  213635 cri.go:89] found id: ""
	I0414 17:50:03.724258  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.724268  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:50:03.724276  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:50:03.724327  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:50:03.758621  213635 cri.go:89] found id: ""
	I0414 17:50:03.758650  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.758661  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:50:03.758668  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:50:03.758735  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:50:03.792524  213635 cri.go:89] found id: ""
	I0414 17:50:03.792553  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.792563  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:50:03.792570  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:50:03.792623  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:50:03.823533  213635 cri.go:89] found id: ""
	I0414 17:50:03.823562  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.823569  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:50:03.823575  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:50:03.823619  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:50:03.855038  213635 cri.go:89] found id: ""
	I0414 17:50:03.855060  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.855067  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:50:03.855072  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:50:03.855122  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:50:03.886260  213635 cri.go:89] found id: ""
	I0414 17:50:03.886288  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.886296  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:50:03.886304  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:50:03.886314  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:50:03.935750  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:50:03.935780  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:50:03.948571  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:50:03.948599  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:50:04.016600  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:50:04.016625  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:50:04.016641  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:50:04.095247  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:50:04.095278  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:50:06.633583  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:50:06.647292  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:50:06.647371  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:50:06.680994  213635 cri.go:89] found id: ""
	I0414 17:50:06.681023  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.681031  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:50:06.681036  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:50:06.681093  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:50:06.715235  213635 cri.go:89] found id: ""
	I0414 17:50:06.715262  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.715269  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:50:06.715275  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:50:06.715333  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:50:06.750320  213635 cri.go:89] found id: ""
	I0414 17:50:06.750349  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.750359  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:50:06.750367  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:50:06.750425  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:50:06.781634  213635 cri.go:89] found id: ""
	I0414 17:50:06.781657  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.781666  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:50:06.781673  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:50:06.781731  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:50:06.812684  213635 cri.go:89] found id: ""
	I0414 17:50:06.812709  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.812719  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:50:06.812727  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:50:06.812785  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:50:06.843417  213635 cri.go:89] found id: ""
	I0414 17:50:06.843447  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.843458  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:50:06.843466  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:50:06.843519  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:50:06.878915  213635 cri.go:89] found id: ""
	I0414 17:50:06.878943  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.878952  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:50:06.878958  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:50:06.879018  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:50:06.911647  213635 cri.go:89] found id: ""
	I0414 17:50:06.911670  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.911680  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:50:06.911705  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:50:06.911720  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:50:06.977253  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:50:06.977286  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:50:06.977304  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:50:07.056442  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:50:07.056475  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:50:07.104053  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:50:07.104082  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:50:07.153444  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:50:07.153483  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
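
With no containers to inspect, log collection falls back to host services. A local sketch of the same four gathering commands, each copied verbatim from the log (they go through bash -c because of the pipes and command substitution):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command strings copied from the log above.
	cmds := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		fmt.Printf("==> %s <==\n%s(err: %v)\n", c.name, out, err)
	}
}
```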
	I0414 17:50:04.281983  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:04.776666  213406 pod_ready.go:82] duration metric: took 4m0.000384507s for pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace to be "Ready" ...
	E0414 17:50:04.776701  213406 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0414 17:50:04.776719  213406 pod_ready.go:39] duration metric: took 4m12.533820908s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:50:04.776753  213406 kubeadm.go:597] duration metric: took 4m20.355244776s to restartPrimaryControlPlane
	W0414 17:50:04.776834  213406 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0414 17:50:04.776879  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 17:50:09.667392  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:50:09.680695  213635 kubeadm.go:597] duration metric: took 4m3.288338716s to restartPrimaryControlPlane
	W0414 17:50:09.680757  213635 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0414 17:50:09.680787  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 17:50:15.123013  213635 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.442204913s)
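
Both profiles (213635 on v1.20.0, 213406 on v1.32.2) give up on restarting the existing control plane after the four-minute wait and fall back to wiping it: `kubeadm reset` followed by a fresh `kubeadm init`. A sketch of the reset step, assuming local execution; the PATH override is how minikube selects its version-pinned kubeadm binary:

```go
package main

import (
	"fmt"
	"os/exec"
)

// resetCluster runs the version-pinned kubeadm through bash so the PATH
// override takes effect, exactly as in the log; --force skips the prompt.
func resetCluster(version string) error {
	cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force`, version)
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func main() {
	if err := resetCluster("v1.20.0"); err != nil {
		fmt.Println("kubeadm reset failed:", err)
	}
}
```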
	I0414 17:50:15.123098  213635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:50:15.137541  213635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:50:15.147676  213635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:50:15.157224  213635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:50:15.157238  213635 kubeadm.go:157] found existing configuration files:
	
	I0414 17:50:15.157273  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:50:15.166484  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:50:15.166525  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:50:15.175831  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:50:15.184692  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:50:15.184731  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:50:15.193871  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:50:15.202947  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:50:15.202993  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:50:15.212451  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:50:15.221477  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:50:15.221512  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
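
After the reset, the grep/rm pairs above decide which leftover kubeconfigs to scrub: a file is kept only if it still references the expected control-plane endpoint. A compact sketch of that cleanup, with the paths and endpoint taken from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			// grep exits non-zero both when the file is missing and when the
			// endpoint isn't in it; either way the stale file is removed so
			// the upcoming `kubeadm init` regenerates it.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}
```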
	I0414 17:50:15.231277  213635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:50:15.294259  213635 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 17:50:15.294330  213635 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:50:15.422321  213635 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:50:15.422476  213635 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:50:15.422622  213635 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 17:50:15.596146  213635 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:50:15.598667  213635 out.go:235]   - Generating certificates and keys ...
	I0414 17:50:15.598769  213635 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:50:15.598859  213635 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:50:15.598976  213635 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 17:50:15.599034  213635 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 17:50:15.599148  213635 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 17:50:15.599238  213635 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 17:50:15.599301  213635 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 17:50:15.599353  213635 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 17:50:15.599416  213635 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 17:50:15.599514  213635 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 17:50:15.599573  213635 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 17:50:15.599654  213635 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:50:15.664653  213635 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:50:15.743669  213635 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:50:15.813965  213635 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:50:16.089174  213635 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:50:16.103702  213635 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:50:16.104792  213635 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:50:16.104884  213635 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:50:16.250169  213635 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:50:16.252518  213635 out.go:235]   - Booting up control plane ...
	I0414 17:50:16.252640  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:50:16.262331  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:50:16.263648  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:50:16.264988  213635 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:50:16.267648  213635 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
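
The `kubeadm init` driving the phases above is constructed with a long --ignore-preflight-errors list. A sketch of that invocation, again assuming local execution; the comments give a plausible reading of why each check is skipped, which the log itself does not state:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Preflight checks minikube tells kubeadm to skip, copied from the log.
	// Plausibly: the dirs/manifests already exist because this is a re-init
	// over a reset cluster, port 10250 is held by the running kubelet, and
	// the VM intentionally runs with swap/resources stock kubeadm rejects.
	ignores := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"DirAvailable--var-lib-minikube-etcd",
		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
		"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
		"Port-10250", "Swap", "NumCPU", "Mem",
	}
	cmd := fmt.Sprintf(
		`sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s`,
		strings.Join(ignores, ","))
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("%s(err: %v)\n", out, err)
}
```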
	I0414 17:50:32.538099  213406 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.761187529s)
	I0414 17:50:32.538165  213406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:50:32.553667  213406 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:50:32.563284  213406 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:50:32.572633  213406 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:50:32.572650  213406 kubeadm.go:157] found existing configuration files:
	
	I0414 17:50:32.572699  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:50:32.581936  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:50:32.581989  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:50:32.592144  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:50:32.600756  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:50:32.600806  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:50:32.610243  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:50:32.619999  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:50:32.620046  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:50:32.629791  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:50:32.639153  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:50:32.639192  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 17:50:32.648625  213406 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:50:32.799107  213406 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:50:40.718968  213406 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 17:50:40.719047  213406 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:50:40.719195  213406 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:50:40.719284  213406 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:50:40.719402  213406 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 17:50:40.719495  213406 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:50:40.720874  213406 out.go:235]   - Generating certificates and keys ...
	I0414 17:50:40.720969  213406 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:50:40.721050  213406 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:50:40.721133  213406 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 17:50:40.721193  213406 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 17:50:40.721253  213406 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 17:50:40.721300  213406 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 17:50:40.721375  213406 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 17:50:40.721457  213406 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 17:50:40.721523  213406 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 17:50:40.721588  213406 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 17:50:40.721623  213406 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 17:50:40.721690  213406 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:50:40.721773  213406 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:50:40.721867  213406 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 17:50:40.721954  213406 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:50:40.722064  213406 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:50:40.722157  213406 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:50:40.722264  213406 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:50:40.722356  213406 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:50:40.724310  213406 out.go:235]   - Booting up control plane ...
	I0414 17:50:40.724425  213406 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:50:40.724523  213406 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:50:40.724621  213406 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:50:40.724763  213406 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:50:40.724890  213406 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:50:40.724962  213406 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:50:40.725139  213406 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 17:50:40.725268  213406 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 17:50:40.725360  213406 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000971318s
	I0414 17:50:40.725463  213406 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 17:50:40.725555  213406 kubeadm.go:310] [api-check] The API server is healthy after 4.502714129s
	I0414 17:50:40.725689  213406 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 17:50:40.725884  213406 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 17:50:40.725975  213406 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 17:50:40.726178  213406 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-418468 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 17:50:40.726245  213406 kubeadm.go:310] [bootstrap-token] Using token: 2kykq2.rhxxbbskj81go9zq
	I0414 17:50:40.727271  213406 out.go:235]   - Configuring RBAC rules ...
	I0414 17:50:40.727362  213406 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 17:50:40.727452  213406 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 17:50:40.727612  213406 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 17:50:40.727733  213406 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 17:50:40.727879  213406 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 17:50:40.728009  213406 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 17:50:40.728182  213406 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 17:50:40.728252  213406 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 17:50:40.728308  213406 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 17:50:40.728315  213406 kubeadm.go:310] 
	I0414 17:50:40.728365  213406 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 17:50:40.728374  213406 kubeadm.go:310] 
	I0414 17:50:40.728444  213406 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 17:50:40.728450  213406 kubeadm.go:310] 
	I0414 17:50:40.728487  213406 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 17:50:40.728568  213406 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 17:50:40.728654  213406 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 17:50:40.728663  213406 kubeadm.go:310] 
	I0414 17:50:40.728744  213406 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 17:50:40.728753  213406 kubeadm.go:310] 
	I0414 17:50:40.728829  213406 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 17:50:40.728841  213406 kubeadm.go:310] 
	I0414 17:50:40.728888  213406 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 17:50:40.728953  213406 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 17:50:40.729011  213406 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 17:50:40.729017  213406 kubeadm.go:310] 
	I0414 17:50:40.729090  213406 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 17:50:40.729163  213406 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 17:50:40.729169  213406 kubeadm.go:310] 
	I0414 17:50:40.729277  213406 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2kykq2.rhxxbbskj81go9zq \
	I0414 17:50:40.729434  213406 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d \
	I0414 17:50:40.729480  213406 kubeadm.go:310] 	--control-plane 
	I0414 17:50:40.729489  213406 kubeadm.go:310] 
	I0414 17:50:40.729585  213406 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 17:50:40.729599  213406 kubeadm.go:310] 
	I0414 17:50:40.729712  213406 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2kykq2.rhxxbbskj81go9zq \
	I0414 17:50:40.729880  213406 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d 
	I0414 17:50:40.729894  213406 cni.go:84] Creating CNI manager for ""
	I0414 17:50:40.729902  213406 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:50:40.731470  213406 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 17:50:40.732385  213406 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 17:50:40.744504  213406 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
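
With init done, minikube writes its bridge CNI config. The log only records that 496 bytes went to /etc/cni/net.d/1-k8s.conflist, not the payload itself, so the conflist below is a representative bridge + portmap chain, not minikube's actual bytes; the subnet and plugin settings are illustrative assumptions:

```go
package main

import "os"

// A representative bridge CNI conflist (illustrative, not minikube's payload).
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}
```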
	I0414 17:50:40.762319  213406 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 17:50:40.762424  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:40.762443  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-418468 minikube.k8s.io/updated_at=2025_04_14T17_50_40_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f1e69a1cd498979c80dbe968253c827f6eb2cf37 minikube.k8s.io/name=embed-certs-418468 minikube.k8s.io/primary=true
	I0414 17:50:40.994576  213406 ops.go:34] apiserver oom_adj: -16
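
The `cat /proc/$(pgrep kube-apiserver)/oom_adj` probe, reported just above as -16, confirms the apiserver is shielded from the OOM killer. A dependency-free sketch of the same check, scanning /proc instead of shelling out to pgrep:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

func main() {
	entries, _ := os.ReadDir("/proc")
	for _, e := range entries {
		pid, err := strconv.Atoi(e.Name()) // numeric dirs are processes
		if err != nil {
			continue
		}
		comm, err := os.ReadFile(filepath.Join("/proc", e.Name(), "comm"))
		if err != nil || strings.TrimSpace(string(comm)) != "kube-apiserver" {
			continue
		}
		adj, _ := os.ReadFile(filepath.Join("/proc", e.Name(), "oom_adj"))
		fmt.Printf("apiserver pid %d oom_adj: %s", pid, adj) // log shows -16
		return
	}
}
```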
	I0414 17:50:40.994598  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:41.495583  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:41.995608  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:42.494670  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:42.995490  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:43.494862  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:43.995730  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:44.495428  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:44.592036  213406 kubeadm.go:1113] duration metric: took 3.829658673s to wait for elevateKubeSystemPrivileges
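
elevateKubeSystemPrivileges is the minikube-rbac clusterrolebinding plus the half-second `kubectl get sa default` retries visible above: the binding only becomes useful once the controller-manager has minted the default ServiceAccount. A sketch of that polling loop, with binary and kubeconfig paths from the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Poll `kubectl get sa default` until the default ServiceAccount exists,
// matching the ~500ms cadence of the timestamps in the log above.
func main() {
	kubectl := "/var/lib/minikube/binaries/v1.32.2/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}
```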
	I0414 17:50:44.592070  213406 kubeadm.go:394] duration metric: took 5m0.228669417s to StartCluster
	I0414 17:50:44.592092  213406 settings.go:142] acquiring lock: {Name:mk0f1596f566b3225bf96154f374fff0641b21e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:50:44.592185  213406 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:50:44.593289  213406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:50:44.593514  213406 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.199 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 17:50:44.593648  213406 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 17:50:44.593726  213406 config.go:182] Loaded profile config "embed-certs-418468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:50:44.593753  213406 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-418468"
	I0414 17:50:44.593775  213406 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-418468"
	W0414 17:50:44.593788  213406 addons.go:247] addon storage-provisioner should already be in state true
	I0414 17:50:44.593788  213406 addons.go:69] Setting dashboard=true in profile "embed-certs-418468"
	I0414 17:50:44.593793  213406 addons.go:69] Setting metrics-server=true in profile "embed-certs-418468"
	I0414 17:50:44.593809  213406 addons.go:238] Setting addon dashboard=true in "embed-certs-418468"
	I0414 17:50:44.593818  213406 addons.go:238] Setting addon metrics-server=true in "embed-certs-418468"
	W0414 17:50:44.593840  213406 addons.go:247] addon metrics-server should already be in state true
	I0414 17:50:44.593774  213406 addons.go:69] Setting default-storageclass=true in profile "embed-certs-418468"
	I0414 17:50:44.593872  213406 host.go:66] Checking if "embed-certs-418468" exists ...
	I0414 17:50:44.593881  213406 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-418468"
	W0414 17:50:44.593819  213406 addons.go:247] addon dashboard should already be in state true
	I0414 17:50:44.593841  213406 host.go:66] Checking if "embed-certs-418468" exists ...
	I0414 17:50:44.593949  213406 host.go:66] Checking if "embed-certs-418468" exists ...
	I0414 17:50:44.594259  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.594294  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.594307  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.594325  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.594382  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.594404  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.594442  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.594407  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.595088  213406 out.go:177] * Verifying Kubernetes components...
	I0414 17:50:44.596521  213406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:50:44.609533  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43747
	I0414 17:50:44.609575  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37055
	I0414 17:50:44.609610  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39057
	I0414 17:50:44.610072  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.610124  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.610136  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.610594  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.610614  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.610724  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.610728  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.610746  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.610783  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.610997  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.611126  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.611245  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.611287  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetState
	I0414 17:50:44.611566  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.611607  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.611855  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.611890  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.612974  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44885
	I0414 17:50:44.613483  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.614431  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.614549  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.614940  213406 addons.go:238] Setting addon default-storageclass=true in "embed-certs-418468"
	W0414 17:50:44.614962  213406 addons.go:247] addon default-storageclass should already be in state true
	I0414 17:50:44.614990  213406 host.go:66] Checking if "embed-certs-418468" exists ...
	I0414 17:50:44.614950  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.615345  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.615388  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.615539  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.615584  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.626843  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33791
	I0414 17:50:44.627427  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.627885  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.627905  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.628338  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.628542  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetState
	I0414 17:50:44.629083  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0414 17:50:44.629405  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.629932  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.629948  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.630188  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0414 17:50:44.630331  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.630425  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:50:44.630488  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.630767  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.630792  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.630993  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.631008  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.631289  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.631482  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetState
	I0414 17:50:44.632157  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44551
	I0414 17:50:44.632324  213406 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:50:44.632525  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.633136  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.633159  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.633372  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:50:44.633566  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.633657  213406 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:50:44.633675  213406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 17:50:44.633693  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:50:44.633762  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetState
	I0414 17:50:44.634840  213406 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0414 17:50:44.635923  213406 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0414 17:50:44.636145  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:50:44.636955  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0414 17:50:44.636970  213406 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0414 17:50:44.636984  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:50:44.637272  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.637551  213406 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0414 17:50:44.637668  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:50:44.637698  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.637892  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:50:44.638053  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:50:44.638220  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:50:44.638412  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:50:44.638614  213406 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0414 17:50:44.638627  213406 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0414 17:50:44.638642  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:50:44.640489  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.640921  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:50:44.640999  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.641118  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:50:44.641252  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:50:44.641353  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:50:44.641461  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:50:44.641481  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.641837  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:50:44.641860  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.642029  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:50:44.642195  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:50:44.642338  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:50:44.642468  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:50:44.649470  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I0414 17:50:44.649885  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.650319  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.650332  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.650688  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.650862  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetState
	I0414 17:50:44.652217  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:50:44.652408  213406 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 17:50:44.652422  213406 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 17:50:44.652437  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:50:44.654995  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.655423  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:50:44.655451  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.655552  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:50:44.655680  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:50:44.655776  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:50:44.655847  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:50:44.771042  213406 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:50:44.790138  213406 node_ready.go:35] waiting up to 6m0s for node "embed-certs-418468" to be "Ready" ...
	I0414 17:50:44.813392  213406 node_ready.go:49] node "embed-certs-418468" has status "Ready":"True"
	I0414 17:50:44.813417  213406 node_ready.go:38] duration metric: took 23.248396ms for node "embed-certs-418468" to be "Ready" ...
	I0414 17:50:44.813429  213406 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:50:44.816247  213406 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
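
The node_ready/pod_ready waits are poll loops against the API server. A sketch of the node half using client-go directly (an assumption: minikube's own helpers wrap a kubernetes.Interface similarly; requires k8s.io/client-go in go.mod):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node's Ready condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, _ := nodeReady(cs, "embed-certs-418468"); ok {
			fmt.Println(`node "embed-certs-418468" has status "Ready":"True"`)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node Ready")
}
```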
	I0414 17:50:44.901629  213406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 17:50:44.909788  213406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:50:44.915477  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0414 17:50:44.915498  213406 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0414 17:50:44.941111  213406 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0414 17:50:44.941132  213406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0414 17:50:44.962200  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0414 17:50:44.962221  213406 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0414 17:50:45.009756  213406 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0414 17:50:45.009781  213406 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0414 17:50:45.045994  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0414 17:50:45.046027  213406 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0414 17:50:45.110797  213406 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 17:50:45.110830  213406 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0414 17:50:45.174495  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0414 17:50:45.174532  213406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0414 17:50:45.225055  213406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 17:50:45.260868  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0414 17:50:45.260897  213406 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0414 17:50:45.286443  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:45.286475  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:45.286795  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:45.286859  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:45.286873  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:45.286882  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:45.286824  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Closing plugin on server side
	I0414 17:50:45.287121  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:45.287165  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:45.319685  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:45.319702  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:45.320094  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:45.320125  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:45.320125  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Closing plugin on server side
	I0414 17:50:45.348341  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0414 17:50:45.348362  213406 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0414 17:50:45.425795  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0414 17:50:45.425820  213406 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0414 17:50:45.460510  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0414 17:50:45.460534  213406 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0414 17:50:45.539385  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0414 17:50:45.539413  213406 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0414 17:50:45.581338  213406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
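
Each addon is staged file by file under /etc/kubernetes/addons (the scp lines above) and then applied in a single kubectl call with one -f per manifest, exactly as in the command on the previous line. A sketch of the dashboard apply:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	files := []string{
		"dashboard-ns.yaml", "dashboard-clusterrole.yaml", "dashboard-clusterrolebinding.yaml",
		"dashboard-configmap.yaml", "dashboard-dp.yaml", "dashboard-role.yaml",
		"dashboard-rolebinding.yaml", "dashboard-sa.yaml", "dashboard-secret.yaml",
		"dashboard-svc.yaml",
	}
	// sudo accepts a VAR=value prefix, so KUBECONFIG rides along as in the log.
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.32.2/kubectl", "apply"}
	for _, f := range files {
		args = append(args, "-f", "/etc/kubernetes/addons/"+f)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Printf("%s(err: %v)\n", out, err)
}
```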
	I0414 17:50:45.899255  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:45.899281  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:45.899682  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:45.899757  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:45.899701  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Closing plugin on server side
	I0414 17:50:45.899772  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:45.899847  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:45.900112  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:45.900124  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:46.625721  213406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.400621394s)
	I0414 17:50:46.625789  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:46.625805  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:46.626108  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:46.626152  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:46.626167  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:46.626175  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:46.626444  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Closing plugin on server side
	I0414 17:50:46.626480  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:46.626495  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:46.626506  213406 addons.go:479] Verifying addon metrics-server=true in "embed-certs-418468"
	I0414 17:50:46.825449  213406 pod_ready.go:103] pod "etcd-embed-certs-418468" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:47.825152  213406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.24373778s)
	I0414 17:50:47.825202  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:47.825214  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:47.825570  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:47.825589  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:47.825599  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:47.825606  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:47.825874  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:47.825893  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:47.827533  213406 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-418468 addons enable metrics-server
	
	I0414 17:50:47.828991  213406 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0414 17:50:47.830391  213406 addons.go:514] duration metric: took 3.236761674s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0414 17:50:49.325501  213406 pod_ready.go:103] pod "etcd-embed-certs-418468" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:51.822230  213406 pod_ready.go:103] pod "etcd-embed-certs-418468" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:53.821538  213406 pod_ready.go:93] pod "etcd-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:50:53.821565  213406 pod_ready.go:82] duration metric: took 9.005299134s for pod "etcd-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.821578  213406 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.825285  213406 pod_ready.go:93] pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:50:53.825300  213406 pod_ready.go:82] duration metric: took 3.715551ms for pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.825308  213406 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.829517  213406 pod_ready.go:93] pod "kube-controller-manager-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:50:53.829531  213406 pod_ready.go:82] duration metric: took 4.218381ms for pod "kube-controller-manager-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.829538  213406 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.835753  213406 pod_ready.go:93] pod "kube-scheduler-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:50:53.835766  213406 pod_ready.go:82] duration metric: took 6.223543ms for pod "kube-scheduler-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.835772  213406 pod_ready.go:39] duration metric: took 9.022329744s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:50:53.835786  213406 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:50:53.835832  213406 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:50:53.867607  213406 api_server.go:72] duration metric: took 9.274050694s to wait for apiserver process to appear ...
	I0414 17:50:53.867636  213406 api_server.go:88] waiting for apiserver healthz status ...
	I0414 17:50:53.867656  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:50:53.871486  213406 api_server.go:279] https://192.168.50.199:8443/healthz returned 200:
	ok
	I0414 17:50:53.872317  213406 api_server.go:141] control plane version: v1.32.2
	I0414 17:50:53.872338  213406 api_server.go:131] duration metric: took 4.691901ms to wait for apiserver health ...
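	
	The healthz probe above can be reproduced by hand against the same endpoint; a minimal sketch, assuming the host can reach the VM and skipping TLS verification because the cluster CA is not in the local trust store:
	
		curl -k https://192.168.50.199:8443/healthz
		# a healthy apiserver answers with the body: ok
	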
	I0414 17:50:53.872344  213406 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 17:50:53.878405  213406 system_pods.go:59] 9 kube-system pods found
	I0414 17:50:53.878425  213406 system_pods.go:61] "coredns-668d6bf9bc-4vrqt" [0738482b-8c0d-4c89-a82f-3dd26a143603] Running
	I0414 17:50:53.878430  213406 system_pods.go:61] "coredns-668d6bf9bc-kbbbq" [24fdcd3d-22b7-4976-85f2-42754178ac49] Running
	I0414 17:50:53.878434  213406 system_pods.go:61] "etcd-embed-certs-418468" [97963194-6254-4aaf-b879-3c4000c86351] Running
	I0414 17:50:53.878437  213406 system_pods.go:61] "kube-apiserver-embed-certs-418468" [8cdb0b46-19da-4d8e-9bd0-7efaa4ef75e6] Running
	I0414 17:50:53.878441  213406 system_pods.go:61] "kube-controller-manager-embed-certs-418468" [7d26ed2b-d015-4015-b248-ccce9e76a6bb] Running
	I0414 17:50:53.878444  213406 system_pods.go:61] "kube-proxy-zqrnn" [b0b54433-bd5d-4c9b-a547-8558e3d66058] Running
	I0414 17:50:53.878447  213406 system_pods.go:61] "kube-scheduler-embed-certs-418468" [5bd1256a-1d95-4e7d-b52e-0208820937f8] Running
	I0414 17:50:53.878454  213406 system_pods.go:61] "metrics-server-f79f97bbb-8blvp" [39557b8d-be28-48b9-ab37-76c22f46341d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:50:53.878461  213406 system_pods.go:61] "storage-provisioner" [136247c3-315f-43ad-a40d-080ad60a6b45] Running
	I0414 17:50:53.878469  213406 system_pods.go:74] duration metric: took 6.120329ms to wait for pod list to return data ...
	I0414 17:50:53.878478  213406 default_sa.go:34] waiting for default service account to be created ...
	I0414 17:50:53.880531  213406 default_sa.go:45] found service account: "default"
	I0414 17:50:53.880549  213406 default_sa.go:55] duration metric: took 2.064832ms for default service account to be created ...
	I0414 17:50:53.880558  213406 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 17:50:54.020249  213406 system_pods.go:86] 9 kube-system pods found
	I0414 17:50:54.020276  213406 system_pods.go:89] "coredns-668d6bf9bc-4vrqt" [0738482b-8c0d-4c89-a82f-3dd26a143603] Running
	I0414 17:50:54.020282  213406 system_pods.go:89] "coredns-668d6bf9bc-kbbbq" [24fdcd3d-22b7-4976-85f2-42754178ac49] Running
	I0414 17:50:54.020286  213406 system_pods.go:89] "etcd-embed-certs-418468" [97963194-6254-4aaf-b879-3c4000c86351] Running
	I0414 17:50:54.020290  213406 system_pods.go:89] "kube-apiserver-embed-certs-418468" [8cdb0b46-19da-4d8e-9bd0-7efaa4ef75e6] Running
	I0414 17:50:54.020295  213406 system_pods.go:89] "kube-controller-manager-embed-certs-418468" [7d26ed2b-d015-4015-b248-ccce9e76a6bb] Running
	I0414 17:50:54.020298  213406 system_pods.go:89] "kube-proxy-zqrnn" [b0b54433-bd5d-4c9b-a547-8558e3d66058] Running
	I0414 17:50:54.020301  213406 system_pods.go:89] "kube-scheduler-embed-certs-418468" [5bd1256a-1d95-4e7d-b52e-0208820937f8] Running
	I0414 17:50:54.020307  213406 system_pods.go:89] "metrics-server-f79f97bbb-8blvp" [39557b8d-be28-48b9-ab37-76c22f46341d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:50:54.020312  213406 system_pods.go:89] "storage-provisioner" [136247c3-315f-43ad-a40d-080ad60a6b45] Running
	I0414 17:50:54.020323  213406 system_pods.go:126] duration metric: took 139.758195ms to wait for k8s-apps to be running ...
	I0414 17:50:54.020333  213406 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 17:50:54.020383  213406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:50:54.042446  213406 system_svc.go:56] duration metric: took 22.104112ms (WaitForService) to wait for kubelet
	I0414 17:50:54.042479  213406 kubeadm.go:582] duration metric: took 9.448925946s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 17:50:54.042499  213406 node_conditions.go:102] verifying NodePressure condition ...
	I0414 17:50:54.219590  213406 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 17:50:54.219612  213406 node_conditions.go:123] node cpu capacity is 2
	I0414 17:50:54.219623  213406 node_conditions.go:105] duration metric: took 177.119005ms to run NodePressure ...
	I0414 17:50:54.219634  213406 start.go:241] waiting for startup goroutines ...
	I0414 17:50:54.219642  213406 start.go:246] waiting for cluster config update ...
	I0414 17:50:54.219655  213406 start.go:255] writing updated cluster config ...
	I0414 17:50:54.219959  213406 ssh_runner.go:195] Run: rm -f paused
	I0414 17:50:54.282458  213406 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 17:50:54.284727  213406 out.go:177] * Done! kubectl is now configured to use "embed-certs-418468" cluster and "default" namespace by default
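	
	At this point the new context can be exercised directly; a minimal sketch, assuming minikube's default behavior of naming the kubectl context after the profile:
	
		kubectl --context embed-certs-418468 get pods -A
	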
	I0414 17:50:56.269443  213635 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 17:50:56.270353  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:50:56.270523  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:51:01.271007  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:51:01.271253  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:51:11.271837  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:51:11.272049  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:51:31.273087  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:51:31.273315  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:52:11.275552  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:52:11.275856  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:52:11.275878  213635 kubeadm.go:310] 
	I0414 17:52:11.275927  213635 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 17:52:11.275981  213635 kubeadm.go:310] 		timed out waiting for the condition
	I0414 17:52:11.275991  213635 kubeadm.go:310] 
	I0414 17:52:11.276038  213635 kubeadm.go:310] 	This error is likely caused by:
	I0414 17:52:11.276092  213635 kubeadm.go:310] 		- The kubelet is not running
	I0414 17:52:11.276213  213635 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 17:52:11.276222  213635 kubeadm.go:310] 
	I0414 17:52:11.276375  213635 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 17:52:11.276431  213635 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 17:52:11.276482  213635 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 17:52:11.276502  213635 kubeadm.go:310] 
	I0414 17:52:11.276617  213635 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 17:52:11.276722  213635 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0414 17:52:11.276733  213635 kubeadm.go:310] 
	I0414 17:52:11.276827  213635 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 17:52:11.276902  213635 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 17:52:11.276994  213635 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 17:52:11.277119  213635 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 17:52:11.277137  213635 kubeadm.go:310] 
	I0414 17:52:11.277720  213635 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:52:11.277871  213635 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 17:52:11.277974  213635 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0414 17:52:11.278218  213635 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
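	
	A compact way to run the diagnostics kubeadm suggests above in one pass; a minimal sketch, assuming a shell on the node (e.g. via minikube ssh) and the cri-o socket path shown in the log:
	
		sudo systemctl status kubelet --no-pager
		sudo journalctl -xeu kubelet --no-pager | tail -n 50
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	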
	
	I0414 17:52:11.278258  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 17:52:11.738009  213635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:52:11.752929  213635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:52:11.762849  213635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:52:11.762865  213635 kubeadm.go:157] found existing configuration files:
	
	I0414 17:52:11.762901  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:52:11.772188  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:52:11.772240  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:52:11.781466  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:52:11.790582  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:52:11.790624  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:52:11.799766  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:52:11.808443  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:52:11.808481  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:52:11.817544  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:52:11.826418  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:52:11.826464  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
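	
	The four grep-then-remove pairs above apply one pattern per kubeconfig; the same stale-config cleanup as a loop, a hypothetical equivalent rather than minikube's actual code:
	
		for f in admin kubelet controller-manager scheduler; do
		  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
		    || sudo rm -f /etc/kubernetes/$f.conf
		done
	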
	I0414 17:52:11.835946  213635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:52:11.910031  213635 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 17:52:11.910113  213635 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:52:12.048882  213635 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:52:12.049032  213635 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:52:12.049160  213635 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 17:52:12.216124  213635 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:52:12.218841  213635 out.go:235]   - Generating certificates and keys ...
	I0414 17:52:12.218938  213635 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:52:12.219030  213635 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:52:12.219153  213635 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 17:52:12.219244  213635 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 17:52:12.219342  213635 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 17:52:12.219420  213635 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 17:52:12.219507  213635 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 17:52:12.219612  213635 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 17:52:12.219690  213635 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 17:52:12.219802  213635 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 17:52:12.219867  213635 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 17:52:12.219917  213635 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:52:12.485118  213635 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:52:12.699901  213635 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:52:12.798407  213635 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:52:12.941803  213635 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:52:12.964937  213635 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:52:12.965897  213635 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:52:12.966059  213635 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:52:13.109607  213635 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:52:13.112109  213635 out.go:235]   - Booting up control plane ...
	I0414 17:52:13.112248  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:52:13.115664  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:52:13.117940  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:52:13.119128  213635 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:52:13.123525  213635 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 17:52:53.126895  213635 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 17:52:53.127019  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:52:53.127237  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:52:58.127800  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:52:58.127997  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:53:08.128675  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:53:08.128878  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:53:28.129416  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:53:28.129642  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:54:08.127998  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:54:08.128303  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:54:08.128326  213635 kubeadm.go:310] 
	I0414 17:54:08.128362  213635 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 17:54:08.128505  213635 kubeadm.go:310] 		timed out waiting for the condition
	I0414 17:54:08.128527  213635 kubeadm.go:310] 
	I0414 17:54:08.128595  213635 kubeadm.go:310] 	This error is likely caused by:
	I0414 17:54:08.128640  213635 kubeadm.go:310] 		- The kubelet is not running
	I0414 17:54:08.128791  213635 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 17:54:08.128814  213635 kubeadm.go:310] 
	I0414 17:54:08.128946  213635 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 17:54:08.128997  213635 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 17:54:08.129043  213635 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 17:54:08.129052  213635 kubeadm.go:310] 
	I0414 17:54:08.129167  213635 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 17:54:08.129296  213635 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0414 17:54:08.129314  213635 kubeadm.go:310] 
	I0414 17:54:08.129479  213635 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 17:54:08.129615  213635 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 17:54:08.129706  213635 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 17:54:08.129814  213635 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 17:54:08.129824  213635 kubeadm.go:310] 
	I0414 17:54:08.130345  213635 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:54:08.130443  213635 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 17:54:08.130555  213635 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 17:54:08.130646  213635 kubeadm.go:394] duration metric: took 8m1.792756267s to StartCluster
	I0414 17:54:08.130721  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:54:08.130802  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:54:08.175207  213635 cri.go:89] found id: ""
	I0414 17:54:08.175243  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.175251  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:54:08.175257  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:54:08.175311  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:54:08.209345  213635 cri.go:89] found id: ""
	I0414 17:54:08.209370  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.209377  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:54:08.209382  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:54:08.209428  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:54:08.244901  213635 cri.go:89] found id: ""
	I0414 17:54:08.244937  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.244946  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:54:08.244952  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:54:08.245022  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:54:08.279974  213635 cri.go:89] found id: ""
	I0414 17:54:08.279999  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.280006  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:54:08.280011  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:54:08.280065  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:54:08.312666  213635 cri.go:89] found id: ""
	I0414 17:54:08.312691  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.312701  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:54:08.312708  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:54:08.312761  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:54:08.345579  213635 cri.go:89] found id: ""
	I0414 17:54:08.345609  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.345619  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:54:08.345627  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:54:08.345682  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:54:08.377810  213635 cri.go:89] found id: ""
	I0414 17:54:08.377844  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.377853  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:54:08.377858  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:54:08.377900  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:54:08.409648  213635 cri.go:89] found id: ""
	I0414 17:54:08.409673  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.409681  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
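	
	The eight listings above differ only in the --name filter; an equivalent loop, as a hypothetical sketch:
	
		for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
		  sudo crictl ps -a --quiet --name=$name
		done
	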
	I0414 17:54:08.409697  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:54:08.409708  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:54:08.422905  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:54:08.422930  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:54:08.495193  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:54:08.495217  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:54:08.495232  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:54:08.603072  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:54:08.603108  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:54:08.640028  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:54:08.640058  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0414 17:54:08.690480  213635 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 17:54:08.690537  213635 out.go:270] * 
	W0414 17:54:08.690590  213635 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 17:54:08.690605  213635 out.go:270] * 
	W0414 17:54:08.691392  213635 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 17:54:08.694565  213635 out.go:201] 
	W0414 17:54:08.695675  213635 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 17:54:08.695709  213635 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 17:54:08.695724  213635 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
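	
	Applied to this run, the suggested retry would look roughly as follows; a sketch that assumes the profile and settings visible elsewhere in this log (profile old-k8s-version-768580, kvm2 driver, cri-o runtime, Kubernetes v1.20.0):
	
		minikube start -p old-k8s-version-768580 --driver=kvm2 --container-runtime=crio \
		  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	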
	I0414 17:54:08.697684  213635 out.go:201] 
	
	
	==> CRI-O <==
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.339292222Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744653792339263168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70bbd8c7-d482-435e-ab61-eea5ed640b56 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.339755622Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=149d5c07-a300-4106-a6c0-9dd5881323e7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.339875659Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=149d5c07-a300-4106-a6c0-9dd5881323e7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.339912905Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=149d5c07-a300-4106-a6c0-9dd5881323e7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.371583732Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=73d17761-eb27-4378-aa86-3c56a28e26a1 name=/runtime.v1.RuntimeService/Version
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.371678379Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=73d17761-eb27-4378-aa86-3c56a28e26a1 name=/runtime.v1.RuntimeService/Version
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.372777675Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=02915fa0-affe-4a65-8dfd-798d092ef834 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.373225334Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744653792373203527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02915fa0-affe-4a65-8dfd-798d092ef834 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.373670312Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7f278a3-066d-401a-bb06-e688b293e891 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.373746543Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7f278a3-066d-401a-bb06-e688b293e891 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.373780020Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f7f278a3-066d-401a-bb06-e688b293e891 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.401421383Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e62403ed-a964-4a98-ba2d-2ac7fff2950b name=/runtime.v1.RuntimeService/Version
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.401498944Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e62403ed-a964-4a98-ba2d-2ac7fff2950b name=/runtime.v1.RuntimeService/Version
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.402353818Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=edafd4cb-1cbe-457f-8bd3-7bb1166ece93 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.402780215Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744653792402759689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=edafd4cb-1cbe-457f-8bd3-7bb1166ece93 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.403398406Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=455f9ee2-2c21-480b-a1a1-b480e1e28219 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.403474332Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=455f9ee2-2c21-480b-a1a1-b480e1e28219 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.403510604Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=455f9ee2-2c21-480b-a1a1-b480e1e28219 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.434637570Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9e9c780b-4b15-4ece-b3b4-156161cda79f name=/runtime.v1.RuntimeService/Version
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.434712003Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e9c780b-4b15-4ece-b3b4-156161cda79f name=/runtime.v1.RuntimeService/Version
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.436337169Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24e0528e-3f53-40fa-a69c-fa9396704232 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.436730806Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744653792436702713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24e0528e-3f53-40fa-a69c-fa9396704232 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.437357236Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c045f33-c1fb-42b9-bf35-ba9a8ce3b7bc name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.437433591Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c045f33-c1fb-42b9-bf35-ba9a8ce3b7bc name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:03:12 old-k8s-version-768580 crio[629]: time="2025-04-14 18:03:12.437467004Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1c045f33-c1fb-42b9-bf35-ba9a8ce3b7bc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr14 17:45] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055960] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049332] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.224319] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.838807] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.420171] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.914151] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.065125] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060469] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.182225] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.143184] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.256654] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[Apr14 17:46] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.073476] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.861304] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[ +14.344832] kauditd_printk_skb: 46 callbacks suppressed
	[Apr14 17:50] systemd-fstab-generator[5080]: Ignoring "noauto" option for root device
	[Apr14 17:52] systemd-fstab-generator[5361]: Ignoring "noauto" option for root device
	[  +0.059704] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:03:12 up 17 min,  0 users,  load average: 0.15, 0.08, 0.08
	Linux old-k8s-version-768580 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 14 18:03:08 old-k8s-version-768580 kubelet[6546]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Apr 14 18:03:08 old-k8s-version-768580 kubelet[6546]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000a19320, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc0006fd950, 0x24, 0x0, ...)
	Apr 14 18:03:08 old-k8s-version-768580 kubelet[6546]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Apr 14 18:03:08 old-k8s-version-768580 kubelet[6546]: net.(*Dialer).DialContext(0xc0001a7380, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0006fd950, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 14 18:03:08 old-k8s-version-768580 kubelet[6546]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Apr 14 18:03:08 old-k8s-version-768580 kubelet[6546]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0000d3e80, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0006fd950, 0x24, 0x60, 0x7ff3a8084578, 0x118, ...)
	Apr 14 18:03:08 old-k8s-version-768580 kubelet[6546]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 14 18:03:08 old-k8s-version-768580 kubelet[6546]: net/http.(*Transport).dial(0xc000610280, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0006fd950, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 14 18:03:08 old-k8s-version-768580 kubelet[6546]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 14 18:03:08 old-k8s-version-768580 kubelet[6546]: net/http.(*Transport).dialConn(0xc000610280, 0x4f7fe00, 0xc000120018, 0x0, 0xc000bd0480, 0x5, 0xc0006fd950, 0x24, 0x0, 0xc000754a20, ...)
	Apr 14 18:03:08 old-k8s-version-768580 kubelet[6546]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 14 18:03:08 old-k8s-version-768580 kubelet[6546]: net/http.(*Transport).dialConnFor(0xc000610280, 0xc000b65760)
	Apr 14 18:03:08 old-k8s-version-768580 kubelet[6546]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 14 18:03:08 old-k8s-version-768580 kubelet[6546]: created by net/http.(*Transport).queueForDial
	Apr 14 18:03:08 old-k8s-version-768580 kubelet[6546]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 14 18:03:08 old-k8s-version-768580 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 14 18:03:08 old-k8s-version-768580 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 14 18:03:09 old-k8s-version-768580 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Apr 14 18:03:09 old-k8s-version-768580 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 14 18:03:09 old-k8s-version-768580 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 14 18:03:09 old-k8s-version-768580 kubelet[6555]: I0414 18:03:09.551128    6555 server.go:416] Version: v1.20.0
	Apr 14 18:03:09 old-k8s-version-768580 kubelet[6555]: I0414 18:03:09.551332    6555 server.go:837] Client rotation is on, will bootstrap in background
	Apr 14 18:03:09 old-k8s-version-768580 kubelet[6555]: I0414 18:03:09.553029    6555 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 14 18:03:09 old-k8s-version-768580 kubelet[6555]: W0414 18:03:09.553696    6555 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 14 18:03:09 old-k8s-version-768580 kubelet[6555]: I0414 18:03:09.554104    6555 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
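The failures captured above are internally consistent: CRI-O returns an empty ListContainersResponse, the "container status" table has no rows, "describe nodes" cannot reach localhost:8443, and the kubelet is crash-looping while dialing the API server (systemd restart counter at 114). In other words, the control plane never came back after the stop/start cycle. A minimal diagnostic sketch to confirm this from the host, assuming the old-k8s-version-768580 VM is still running, mirrors the ssh invocations the suite itself uses:

	out/minikube-linux-amd64 -p old-k8s-version-768580 ssh "sudo crictl ps -a --name kube-apiserver"    # expect no kube-apiserver container
	out/minikube-linux-amd64 -p old-k8s-version-768580 ssh "sudo systemctl status kubelet --no-pager"   # shows the restart loop
	out/minikube-linux-amd64 -p old-k8s-version-768580 ssh "sudo journalctl -u kubelet --no-pager -n 50"

These commands are illustrative only and are not part of the recorded test run.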
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-768580 -n old-k8s-version-768580
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-768580 -n old-k8s-version-768580: exit status 2 (220.792305ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-768580" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.43s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (362.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
E0414 18:03:31.090346  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
E0414 18:03:45.219595  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
E0414 18:04:08.605491  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
E0414 18:04:42.840692  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
E0414 18:06:13.897788  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
[... the warning above repeated 15 times ...]
E0414 18:06:29.218755  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/no-preload-721806/client.crt: no such file or directory" logger="UnhandledError"
[... the warning above repeated 7 times ...]
E0414 18:06:36.446784  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/default-k8s-diff-port-061428/client.crt: no such file or directory" logger="UnhandledError"
[... the warning above repeated 10 times ...]
E0414 18:06:45.782037  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
[... the warning above repeated 23 times ...]
E0414 18:07:08.946619  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
[... the warning above repeated 14 times ...]
E0414 18:07:23.149981  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
[... the warning above repeated 29 times ...]
E0414 18:07:52.062955  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 18:07:52.282573  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/no-preload-721806/client.crt: no such file or directory" logger="UnhandledError"
[... the warning above repeated 7 times ...]
E0414 18:07:59.513053  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/default-k8s-diff-port-061428/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
E0414 18:08:31.089528  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
E0414 18:08:45.219856  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
E0414 18:09:08.605261  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
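The cert_rotation.go:171 errors here and above appear to come from client-go's certificate-reload watcher: it periodically re-reads client certificates from disk, and the profiles it references (no-preload-721806, default-k8s-diff-port-061428, and so on) were already deleted by parallel tests, so each reload fails with "no such file or directory". A minimal sketch of the failing step, assuming a hypothetical reloadClientCert helper and an already-removed profile directory:

	package main

	import (
		"crypto/tls"
		"fmt"
	)

	// reloadClientCert is a hypothetical stand-in for the reload step inside a
	// cert-rotation watcher: the key pair is re-read from disk on every tick,
	// so a profile directory deleted mid-run surfaces as a reload error rather
	// than a startup error.
	func reloadClientCert(certFile, keyFile string) (tls.Certificate, error) {
		return tls.LoadX509KeyPair(certFile, keyFile)
	}

	func main() {
		// Path shape mirrors the log above; the profile no longer exists.
		profile := "/home/jenkins/minikube-integration/20349-149500/.minikube/profiles/no-preload-721806"
		if _, err := reloadClientCert(profile+"/client.crt", profile+"/client.key"); err != nil {
			fmt.Println("key failed with :", err) // same shape as the UnhandledError lines
		}
	}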
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.58:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.58:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-768580 -n old-k8s-version-768580
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-768580 -n old-k8s-version-768580: exit status 2 (222.111716ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-768580" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-768580 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-768580 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.666µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-768580 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
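Each helpers_test.go:329 WARNING above is one iteration of a label-selector poll: the helper keeps listing pods matching k8s-app=kubernetes-dashboard until one appears or the 9m0s deadline expires, and with the apiserver stopped every list call fails with "connection refused" until the context deadline is hit. A minimal sketch of that wait loop, assuming client-go and an already-built clientset (the poll interval and function name are illustrative, not minikube's actual helpers):

	package wait

	import (
		"context"
		"fmt"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsByLabel lists pods matching selector in ns until at least one
	// is returned or ctx expires. Transient errors (for example "connection
	// refused" while the apiserver is down) are logged and retried, which is
	// exactly the shape of the WARNING lines above.
	func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		ticker := time.NewTicker(3 * time.Second) // illustrative poll interval
		defer ticker.Stop()
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				log.Printf("WARNING: pod list for %q %q returned: %v", ns, selector, err)
			} else if len(pods.Items) > 0 {
				return nil // found; the caller can go on to check readiness
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("pod %q failed to start: %w", selector, ctx.Err())
			case <-ticker.C:
			}
		}
	}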
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-768580 -n old-k8s-version-768580
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-768580 -n old-k8s-version-768580: exit status 2 (214.134495ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-768580 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-768580 logs -n 25: (1.054545309s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p default-k8s-diff-port-061428       | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:43 UTC | 14 Apr 25 17:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:43 UTC | 14 Apr 25 17:49 UTC |
	|         | default-k8s-diff-port-061428                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-418468            | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:43 UTC | 14 Apr 25 17:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-418468                                  | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:43 UTC | 14 Apr 25 17:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-768580        | old-k8s-version-768580       | jenkins | v1.35.0 | 14 Apr 25 17:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-418468                 | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:45 UTC | 14 Apr 25 17:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-418468                                  | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:45 UTC | 14 Apr 25 17:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-768580                              | old-k8s-version-768580       | jenkins | v1.35.0 | 14 Apr 25 17:45 UTC | 14 Apr 25 17:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-768580             | old-k8s-version-768580       | jenkins | v1.35.0 | 14 Apr 25 17:45 UTC | 14 Apr 25 17:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-768580                              | old-k8s-version-768580       | jenkins | v1.35.0 | 14 Apr 25 17:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-061428                           | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | default-k8s-diff-port-061428                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | default-k8s-diff-port-061428                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | default-k8s-diff-port-061428                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-061428 | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | default-k8s-diff-port-061428                           |                              |         |         |                     |                     |
	| image   | no-preload-721806 image list                           | no-preload-721806            | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-721806                                   | no-preload-721806            | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-721806                                   | no-preload-721806            | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-721806                                   | no-preload-721806            | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	| delete  | -p no-preload-721806                                   | no-preload-721806            | jenkins | v1.35.0 | 14 Apr 25 17:49 UTC | 14 Apr 25 17:49 UTC |
	| image   | embed-certs-418468 image list                          | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:51 UTC | 14 Apr 25 17:51 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-418468                                  | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:51 UTC | 14 Apr 25 17:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-418468                                  | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:51 UTC | 14 Apr 25 17:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-418468                                  | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:51 UTC | 14 Apr 25 17:51 UTC |
	| delete  | -p embed-certs-418468                                  | embed-certs-418468           | jenkins | v1.35.0 | 14 Apr 25 17:51 UTC | 14 Apr 25 17:51 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 17:45:23
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 17:45:23.282546  213635 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:45:23.282636  213635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:45:23.282647  213635 out.go:358] Setting ErrFile to fd 2...
	I0414 17:45:23.282663  213635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:45:23.282871  213635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	I0414 17:45:23.283429  213635 out.go:352] Setting JSON to false
	I0414 17:45:23.284348  213635 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8821,"bootTime":1744643902,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 17:45:23.284402  213635 start.go:139] virtualization: kvm guest
	I0414 17:45:23.286322  213635 out.go:177] * [old-k8s-version-768580] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 17:45:23.287426  213635 out.go:177]   - MINIKUBE_LOCATION=20349
	I0414 17:45:23.287431  213635 notify.go:220] Checking for updates...
	I0414 17:45:23.289881  213635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 17:45:23.291059  213635 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:45:23.292002  213635 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 17:45:23.293350  213635 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 17:45:23.294814  213635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 17:45:23.296431  213635 config.go:182] Loaded profile config "old-k8s-version-768580": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 17:45:23.296945  213635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:45:23.296998  213635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:45:23.313119  213635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37103
	I0414 17:45:23.313580  213635 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:45:23.314124  213635 main.go:141] libmachine: Using API Version  1
	I0414 17:45:23.314148  213635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:45:23.314493  213635 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:45:23.314664  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:23.316572  213635 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0414 17:45:23.317553  213635 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 17:45:23.317841  213635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:45:23.317876  213635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:45:23.333791  213635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44023
	I0414 17:45:23.334298  213635 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:45:23.334832  213635 main.go:141] libmachine: Using API Version  1
	I0414 17:45:23.334859  213635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:45:23.335206  213635 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:45:23.335410  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:23.372523  213635 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 17:45:23.373766  213635 start.go:297] selected driver: kvm2
	I0414 17:45:23.373785  213635 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-768580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:45:23.373971  213635 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 17:45:23.374697  213635 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:45:23.374756  213635 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20349-149500/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 17:45:23.390328  213635 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 17:45:23.390891  213635 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 17:45:23.390939  213635 cni.go:84] Creating CNI manager for ""
	I0414 17:45:23.390997  213635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:45:23.391057  213635 start.go:340] cluster config:
	{Name:old-k8s-version-768580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:45:23.391177  213635 iso.go:125] acquiring lock: {Name:mk56ab209abfa01de10f2f82564ecd03de00499a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 17:45:23.393503  213635 out.go:177] * Starting "old-k8s-version-768580" primary control-plane node in "old-k8s-version-768580" cluster
	I0414 17:45:18.829481  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Start
	I0414 17:45:18.829626  213406 main.go:141] libmachine: (embed-certs-418468) starting domain...
	I0414 17:45:18.829645  213406 main.go:141] libmachine: (embed-certs-418468) ensuring networks are active...
	I0414 17:45:18.830375  213406 main.go:141] libmachine: (embed-certs-418468) Ensuring network default is active
	I0414 17:45:18.830697  213406 main.go:141] libmachine: (embed-certs-418468) Ensuring network mk-embed-certs-418468 is active
	I0414 17:45:18.831060  213406 main.go:141] libmachine: (embed-certs-418468) getting domain XML...
	I0414 17:45:18.831881  213406 main.go:141] libmachine: (embed-certs-418468) creating domain...
	I0414 17:45:20.130585  213406 main.go:141] libmachine: (embed-certs-418468) waiting for IP...
	I0414 17:45:20.131429  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:20.131906  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:20.131976  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:20.131884  213441 retry.go:31] will retry after 192.442813ms: waiting for domain to come up
	I0414 17:45:20.326250  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:20.326808  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:20.326847  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:20.326777  213441 retry.go:31] will retry after 380.44265ms: waiting for domain to come up
	I0414 17:45:20.709212  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:20.709718  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:20.709747  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:20.709659  213441 retry.go:31] will retry after 412.048423ms: waiting for domain to come up
	I0414 17:45:21.123129  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:21.123522  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:21.123544  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:21.123486  213441 retry.go:31] will retry after 384.561435ms: waiting for domain to come up
	I0414 17:45:21.510029  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:21.510559  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:21.510591  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:21.510521  213441 retry.go:31] will retry after 501.73701ms: waiting for domain to come up
	I0414 17:45:22.014298  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:22.014882  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:22.014914  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:22.014842  213441 retry.go:31] will retry after 757.183938ms: waiting for domain to come up
	I0414 17:45:22.773705  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:22.774323  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:22.774350  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:22.774269  213441 retry.go:31] will retry after 986.137988ms: waiting for domain to come up
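	(The retry.go:31 lines above show libmachine polling libvirt's DHCP leases with a randomized, growing delay until the domain reports an IP. Below is a minimal Go sketch of that polling shape, assuming a hypothetical lookup callback; the helper name and backoff constants are illustrative, not minikube's actual API.)
	
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// waitForIP polls lookup until it returns an address or the deadline passes,
	// sleeping a jittered, growing interval between attempts, the same shape as
	// the "will retry after ...: waiting for domain to come up" lines above.
	func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
		start := time.Now()
		base := 200 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			// jittered backoff: base plus up to 100% extra, capped at 5s
			sleep := base + time.Duration(rand.Int63n(int64(base)))
			if sleep > 5*time.Second {
				sleep = 5 * time.Second
			}
			fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
			time.Sleep(sleep)
			base = base * 3 / 2
		}
		return "", errors.New("timed out waiting for domain IP")
	}
	
	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 { // pretend the lease appears on the fourth poll
				return "", errors.New("no lease yet")
			}
			return "192.168.50.199", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}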
	I0414 17:45:20.888278  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:23.386664  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:24.646290  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:27.145214  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:23.394590  213635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 17:45:23.394621  213635 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 17:45:23.394628  213635 cache.go:56] Caching tarball of preloaded images
	I0414 17:45:23.394721  213635 preload.go:172] Found /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 17:45:23.394735  213635 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0414 17:45:23.394836  213635 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/config.json ...
	I0414 17:45:23.395013  213635 start.go:360] acquireMachinesLock for old-k8s-version-768580: {Name:mk6f64d523f60ec1e047c10a4c586315976dcd43 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 17:45:23.762349  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:23.762955  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:23.762979  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:23.762917  213441 retry.go:31] will retry after 1.10793688s: waiting for domain to come up
	I0414 17:45:24.872355  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:24.872838  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:24.872868  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:24.872798  213441 retry.go:31] will retry after 1.289889749s: waiting for domain to come up
	I0414 17:45:26.163838  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:26.164300  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:26.164340  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:26.164276  213441 retry.go:31] will retry after 1.779294897s: waiting for domain to come up
	I0414 17:45:27.946417  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:27.946918  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:27.946955  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:27.946893  213441 retry.go:31] will retry after 1.873070528s: waiting for domain to come up
	I0414 17:45:25.887339  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:27.888458  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:30.386702  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:29.147468  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:31.647410  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:29.821493  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:29.822082  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:29.822114  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:29.822017  213441 retry.go:31] will retry after 2.200299666s: waiting for domain to come up
	I0414 17:45:32.024275  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:32.024774  213406 main.go:141] libmachine: (embed-certs-418468) DBG | unable to find current IP address of domain embed-certs-418468 in network mk-embed-certs-418468
	I0414 17:45:32.024804  213406 main.go:141] libmachine: (embed-certs-418468) DBG | I0414 17:45:32.024731  213441 retry.go:31] will retry after 4.490034828s: waiting for domain to come up
	I0414 17:45:32.885679  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:34.886662  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:34.145579  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:36.146382  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:38.146697  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:38.262514  213635 start.go:364] duration metric: took 14.867477628s to acquireMachinesLock for "old-k8s-version-768580"
	I0414 17:45:38.262567  213635 start.go:96] Skipping create...Using existing machine configuration
	I0414 17:45:38.262576  213635 fix.go:54] fixHost starting: 
	I0414 17:45:38.262931  213635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:45:38.262975  213635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:45:38.282724  213635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39841
	I0414 17:45:38.283218  213635 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:45:38.283779  213635 main.go:141] libmachine: Using API Version  1
	I0414 17:45:38.283810  213635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:45:38.284194  213635 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:45:38.284403  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:38.284564  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetState
	I0414 17:45:38.285903  213635 fix.go:112] recreateIfNeeded on old-k8s-version-768580: state=Stopped err=<nil>
	I0414 17:45:38.285937  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	W0414 17:45:38.286051  213635 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 17:45:38.287537  213635 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-768580" ...
	I0414 17:45:36.517497  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.518002  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has current primary IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.518029  213406 main.go:141] libmachine: (embed-certs-418468) found domain IP: 192.168.50.199
	I0414 17:45:36.518042  213406 main.go:141] libmachine: (embed-certs-418468) reserving static IP address...
	I0414 17:45:36.518423  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "embed-certs-418468", mac: "52:54:00:2f:33:03", ip: "192.168.50.199"} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:36.518454  213406 main.go:141] libmachine: (embed-certs-418468) DBG | skip adding static IP to network mk-embed-certs-418468 - found existing host DHCP lease matching {name: "embed-certs-418468", mac: "52:54:00:2f:33:03", ip: "192.168.50.199"}
	I0414 17:45:36.518467  213406 main.go:141] libmachine: (embed-certs-418468) reserved static IP address 192.168.50.199 for domain embed-certs-418468
	I0414 17:45:36.518485  213406 main.go:141] libmachine: (embed-certs-418468) waiting for SSH...
	I0414 17:45:36.518500  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Getting to WaitForSSH function...
	I0414 17:45:36.520360  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.520616  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:36.520653  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.520758  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Using SSH client type: external
	I0414 17:45:36.520776  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Using SSH private key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa (-rw-------)
	I0414 17:45:36.520809  213406 main.go:141] libmachine: (embed-certs-418468) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.199 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 17:45:36.520821  213406 main.go:141] libmachine: (embed-certs-418468) DBG | About to run SSH command:
	I0414 17:45:36.520831  213406 main.go:141] libmachine: (embed-certs-418468) DBG | exit 0
	I0414 17:45:36.649576  213406 main.go:141] libmachine: (embed-certs-418468) DBG | SSH cmd err, output: <nil>: 
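	(WaitForSSH above succeeds once `exit 0` returns cleanly over the external ssh client. A rough standalone equivalent of that liveness probe, using a subset of the options from the logged command; the helper name and key path are placeholders, not minikube's code.)
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// sshAlive runs "exit 0" on the guest with non-interactive ssh options,
	// returning nil once the SSH daemon accepts the key and the shell exits 0.
	func sshAlive(ip, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + ip,
			"exit 0",
		}
		return exec.Command("ssh", args...).Run()
	}
	
	func main() {
		err := sshAlive("192.168.50.199", "/path/to/machines/embed-certs-418468/id_rsa")
		fmt.Println("ssh reachable:", err == nil)
	}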
	I0414 17:45:36.649973  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetConfigRaw
	I0414 17:45:36.650596  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetIP
	I0414 17:45:36.653078  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.653409  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:36.653438  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.653654  213406 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/config.json ...
	I0414 17:45:36.653850  213406 machine.go:93] provisionDockerMachine start ...
	I0414 17:45:36.653883  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:45:36.654093  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:36.656193  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.656501  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:36.656527  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.656658  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:36.656818  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:36.656950  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:36.657070  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:36.657214  213406 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:36.657429  213406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.199 22 <nil> <nil>}
	I0414 17:45:36.657439  213406 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 17:45:36.765740  213406 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 17:45:36.765765  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetMachineName
	I0414 17:45:36.766013  213406 buildroot.go:166] provisioning hostname "embed-certs-418468"
	I0414 17:45:36.766041  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetMachineName
	I0414 17:45:36.766237  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:36.768833  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.769137  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:36.769162  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.769335  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:36.769500  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:36.769623  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:36.769731  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:36.769886  213406 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:36.770105  213406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.199 22 <nil> <nil>}
	I0414 17:45:36.770120  213406 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-418468 && echo "embed-certs-418468" | sudo tee /etc/hostname
	I0414 17:45:36.893279  213406 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-418468
	
	I0414 17:45:36.893301  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:36.896024  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.896386  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:36.896415  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:36.896583  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:36.896764  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:36.896953  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:36.897101  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:36.897270  213406 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:36.897545  213406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.199 22 <nil> <nil>}
	I0414 17:45:36.897570  213406 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-418468' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-418468/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-418468' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 17:45:37.024782  213406 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 17:45:37.024811  213406 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20349-149500/.minikube CaCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20349-149500/.minikube}
	I0414 17:45:37.024840  213406 buildroot.go:174] setting up certificates
	I0414 17:45:37.024850  213406 provision.go:84] configureAuth start
	I0414 17:45:37.024858  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetMachineName
	I0414 17:45:37.025122  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetIP
	I0414 17:45:37.027788  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.028176  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:37.028213  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.028409  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:37.030616  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.030956  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:37.030981  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.031177  213406 provision.go:143] copyHostCerts
	I0414 17:45:37.031234  213406 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem, removing ...
	I0414 17:45:37.031248  213406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem
	I0414 17:45:37.031310  213406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem (1082 bytes)
	I0414 17:45:37.031401  213406 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem, removing ...
	I0414 17:45:37.031409  213406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem
	I0414 17:45:37.031435  213406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem (1123 bytes)
	I0414 17:45:37.031497  213406 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem, removing ...
	I0414 17:45:37.031504  213406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem
	I0414 17:45:37.031523  213406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem (1675 bytes)
	I0414 17:45:37.031647  213406 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem org=jenkins.embed-certs-418468 san=[127.0.0.1 192.168.50.199 embed-certs-418468 localhost minikube]
	I0414 17:45:37.627895  213406 provision.go:177] copyRemoteCerts
	I0414 17:45:37.627953  213406 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 17:45:37.627976  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:37.630648  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.630947  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:37.630970  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.631155  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:37.631352  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:37.631526  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:37.631687  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:45:37.716473  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 17:45:37.739929  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 17:45:37.762662  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0414 17:45:37.785121  213406 provision.go:87] duration metric: took 760.257482ms to configureAuth
	I0414 17:45:37.785152  213406 buildroot.go:189] setting minikube options for container-runtime
	I0414 17:45:37.785381  213406 config.go:182] Loaded profile config "embed-certs-418468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:45:37.785455  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:37.788353  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.788678  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:37.788705  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:37.788883  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:37.789017  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:37.789194  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:37.789409  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:37.789591  213406 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:37.789865  213406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.199 22 <nil> <nil>}
	I0414 17:45:37.789886  213406 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 17:45:38.021469  213406 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 17:45:38.021530  213406 machine.go:96] duration metric: took 1.367637028s to provisionDockerMachine
	I0414 17:45:38.021548  213406 start.go:293] postStartSetup for "embed-certs-418468" (driver="kvm2")
	I0414 17:45:38.021567  213406 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 17:45:38.021593  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:45:38.021949  213406 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 17:45:38.021980  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:38.024762  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.025141  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:38.025169  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.025357  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:38.025523  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:38.025702  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:38.025862  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:45:38.112512  213406 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 17:45:38.116757  213406 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 17:45:38.116780  213406 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/addons for local assets ...
	I0414 17:45:38.116832  213406 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/files for local assets ...
	I0414 17:45:38.116909  213406 filesync.go:149] local asset: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem -> 1566332.pem in /etc/ssl/certs
	I0414 17:45:38.116994  213406 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 17:45:38.126428  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:45:38.149529  213406 start.go:296] duration metric: took 127.965801ms for postStartSetup
	I0414 17:45:38.149559  213406 fix.go:56] duration metric: took 19.339332592s for fixHost
	I0414 17:45:38.149597  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:38.152452  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.152857  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:38.152886  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.153029  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:38.153208  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:38.153357  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:38.153527  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:38.153719  213406 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:38.153980  213406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.199 22 <nil> <nil>}
	I0414 17:45:38.153992  213406 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 17:45:38.262398  213406 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744652738.233356501
	
	I0414 17:45:38.262419  213406 fix.go:216] guest clock: 1744652738.233356501
	I0414 17:45:38.262426  213406 fix.go:229] Guest: 2025-04-14 17:45:38.233356501 +0000 UTC Remote: 2025-04-14 17:45:38.149564097 +0000 UTC m=+19.473974968 (delta=83.792404ms)
	I0414 17:45:38.262443  213406 fix.go:200] guest clock delta is within tolerance: 83.792404ms
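	(The guest-clock check above parses the guest's `date +%s.%N` output and compares it against the host-side reading. The sketch below reproduces the logged 83.792404ms delta from the two timestamps in the log; the parsing helper is an assumption, not minikube's fix.go code.)
	
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	// parseGuestClock converts "date +%s.%N" output (e.g. "1744652738.233356501")
	// into a time.Time so it can be compared against the host clock.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			// pad or truncate the fractional part to exactly 9 digits (ns)
			frac := (parts[1] + "000000000")[:9]
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec).UTC(), nil
	}
	
	func main() {
		guest, _ := parseGuestClock("1744652738.233356501") // guest "date +%s.%N"
		host := time.Unix(1744652738, 149564097).UTC()      // host reading from the log
		fmt.Println("delta:", guest.Sub(host))              // prints 83.792404ms
	}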
	I0414 17:45:38.262448  213406 start.go:83] releasing machines lock for "embed-certs-418468", held for 19.452231962s
	I0414 17:45:38.262473  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:45:38.262756  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetIP
	I0414 17:45:38.265776  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.266164  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:38.266194  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.266350  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:45:38.266870  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:45:38.267040  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:45:38.267139  213406 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 17:45:38.267189  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:38.267240  213406 ssh_runner.go:195] Run: cat /version.json
	I0414 17:45:38.267261  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:45:38.269779  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.270093  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:38.270121  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.270142  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.270286  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:38.270481  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:38.270582  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:38.270601  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:38.270633  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:38.270844  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:45:38.270834  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:45:38.270994  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:45:38.271141  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:45:38.271286  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:45:38.360262  213406 ssh_runner.go:195] Run: systemctl --version
	I0414 17:45:38.384263  213406 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 17:45:38.531682  213406 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 17:45:38.539705  213406 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 17:45:38.539793  213406 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 17:45:38.557292  213406 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 17:45:38.557314  213406 start.go:495] detecting cgroup driver to use...
	I0414 17:45:38.557377  213406 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 17:45:38.573739  213406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 17:45:38.587350  213406 docker.go:217] disabling cri-docker service (if available) ...
	I0414 17:45:38.587392  213406 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 17:45:38.601142  213406 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 17:45:38.615569  213406 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 17:45:38.729585  213406 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 17:45:38.866071  213406 docker.go:233] disabling docker service ...
	I0414 17:45:38.866151  213406 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 17:45:38.881173  213406 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 17:45:38.895808  213406 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 17:45:39.055748  213406 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 17:45:39.185218  213406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 17:45:39.200427  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 17:45:39.223755  213406 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 17:45:39.223823  213406 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:39.235661  213406 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 17:45:39.235737  213406 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:39.248125  213406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:39.260302  213406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:39.270988  213406 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 17:45:39.281488  213406 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:39.293593  213406 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:39.314797  213406 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:39.325696  213406 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 17:45:39.334593  213406 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 17:45:39.334634  213406 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 17:45:39.347505  213406 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
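	(The status-255 sysctl above simply means br_netfilter was not loaded yet; the run falls back to modprobe and then enables IPv4 forwarding. A condensed local sketch of that verify-then-enable sequence, with command strings taken from the log and a hypothetical helper name.)
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// ensureBridgeNetfilter mirrors the logged sequence: probe the sysctl, load
	// the kernel module if the key is missing, then turn on IPv4 forwarding.
	func ensureBridgeNetfilter() error {
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			// /proc/sys/net/bridge/* only exists once br_netfilter is loaded
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %w", err)
			}
		}
		return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}
	
	func main() {
		fmt.Println("ensureBridgeNetfilter:", ensureBridgeNetfilter())
	}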
	I0414 17:45:39.357965  213406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:45:39.484049  213406 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 17:45:39.597745  213406 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 17:45:39.597853  213406 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 17:45:39.602871  213406 start.go:563] Will wait 60s for crictl version
	I0414 17:45:39.602925  213406 ssh_runner.go:195] Run: which crictl
	I0414 17:45:39.606796  213406 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 17:45:39.649955  213406 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 17:45:39.650046  213406 ssh_runner.go:195] Run: crio --version
	I0414 17:45:39.681673  213406 ssh_runner.go:195] Run: crio --version
	I0414 17:45:39.710974  213406 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 17:45:36.888095  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:39.387438  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:40.148510  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:42.647398  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:38.288730  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .Start
	I0414 17:45:38.288903  213635 main.go:141] libmachine: (old-k8s-version-768580) starting domain...
	I0414 17:45:38.288928  213635 main.go:141] libmachine: (old-k8s-version-768580) ensuring networks are active...
	I0414 17:45:38.289671  213635 main.go:141] libmachine: (old-k8s-version-768580) Ensuring network default is active
	I0414 17:45:38.290082  213635 main.go:141] libmachine: (old-k8s-version-768580) Ensuring network mk-old-k8s-version-768580 is active
	I0414 17:45:38.290509  213635 main.go:141] libmachine: (old-k8s-version-768580) getting domain XML...
	I0414 17:45:38.291270  213635 main.go:141] libmachine: (old-k8s-version-768580) creating domain...
	I0414 17:45:39.584359  213635 main.go:141] libmachine: (old-k8s-version-768580) waiting for IP...
	I0414 17:45:39.585518  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:39.586108  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:39.586195  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:39.586107  213733 retry.go:31] will retry after 251.417692ms: waiting for domain to come up
	I0414 17:45:39.839778  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:39.840371  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:39.840397  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:39.840338  213733 retry.go:31] will retry after 258.330025ms: waiting for domain to come up
	I0414 17:45:40.100989  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:40.101667  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:40.101696  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:40.101631  213733 retry.go:31] will retry after 334.368733ms: waiting for domain to come up
	I0414 17:45:40.437266  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:40.438218  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:40.438251  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:40.438188  213733 retry.go:31] will retry after 588.313555ms: waiting for domain to come up
	I0414 17:45:41.027969  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:41.028685  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:41.028713  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:41.028667  213733 retry.go:31] will retry after 582.787602ms: waiting for domain to come up
	I0414 17:45:41.613756  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:41.614424  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:41.614476  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:41.614383  213733 retry.go:31] will retry after 695.01431ms: waiting for domain to come up
	I0414 17:45:42.311573  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:42.312134  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:42.312168  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:42.312092  213733 retry.go:31] will retry after 1.050124039s: waiting for domain to come up
	I0414 17:45:39.712262  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetIP
	I0414 17:45:39.715292  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:39.715742  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:45:39.715790  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:45:39.715889  213406 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0414 17:45:39.720056  213406 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 17:45:39.736486  213406 kubeadm.go:883] updating cluster {Name:embed-certs-418468 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-418468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.199 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 17:45:39.736610  213406 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 17:45:39.736663  213406 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:45:39.774478  213406 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 17:45:39.774571  213406 ssh_runner.go:195] Run: which lz4
	I0414 17:45:39.778933  213406 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 17:45:39.783254  213406 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 17:45:39.783294  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 17:45:41.221460  213406 crio.go:462] duration metric: took 1.44257368s to copy over tarball
	I0414 17:45:41.221534  213406 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 17:45:43.485855  213406 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.264254914s)
	I0414 17:45:43.485888  213406 crio.go:469] duration metric: took 2.264398504s to extract the tarball
	I0414 17:45:43.485899  213406 ssh_runner.go:146] rm: /preloaded.tar.lz4
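	(The preload step above reduces to copy, extract with lz4, delete: scp the tarball in, untar it into /var, remove it. A local sketch of the same tar invocation follows; paths are illustrative, and the logged run executes these commands over SSH via ssh_runner rather than locally.)
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	// extractPreload unpacks an lz4-compressed image tarball into destDir with
	// the same tar flags the log shows, then removes the tarball (removal of a
	// root-owned file may itself need elevated rights).
	func extractPreload(tarball, destDir string) error {
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", destDir, "-xf", tarball)
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("extract %s: %w", tarball, err)
		}
		return os.Remove(tarball)
	}
	
	func main() {
		fmt.Println(extractPreload("/preloaded.tar.lz4", "/var"))
	}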
	I0414 17:45:43.525207  213406 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:45:43.573036  213406 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 17:45:43.573060  213406 cache_images.go:84] Images are preloaded, skipping loading
	I0414 17:45:43.573068  213406 kubeadm.go:934] updating node { 192.168.50.199 8443 v1.32.2 crio true true} ...
	I0414 17:45:43.573156  213406 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-418468 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-418468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 17:45:43.573214  213406 ssh_runner.go:195] Run: crio config
	I0414 17:45:43.633728  213406 cni.go:84] Creating CNI manager for ""
	I0414 17:45:43.633753  213406 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:45:43.633765  213406 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 17:45:43.633791  213406 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.199 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-418468 NodeName:embed-certs-418468 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 17:45:43.633949  213406 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-418468"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.199"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.199"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
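The rendered kubeadm config above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined by "---" separators, with kubelet disk eviction effectively disabled for CI. For reference, a minimal Go sketch, purely illustrative and not minikube code, that splits such a multi-document config and reports each document's apiVersion/kind (the checkKinds helper is hypothetical):

	package main

	import (
		"fmt"
		"strings"

		"gopkg.in/yaml.v3"
	)

	// checkKinds splits a multi-document config on "---" separators and
	// decodes only the apiVersion/kind header of each document.
	func checkKinds(config string) ([]string, error) {
		var kinds []string
		for _, doc := range strings.Split(config, "\n---\n") {
			var hdr struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := yaml.Unmarshal([]byte(doc), &hdr); err != nil {
				return nil, err
			}
			kinds = append(kinds, hdr.APIVersion+"/"+hdr.Kind)
		}
		return kinds, nil
	}

	func main() {
		// Toy two-document stand-in for the generated kubeadm.yaml.
		const rendered = "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n"
		kinds, err := checkKinds(rendered)
		if err != nil {
			panic(err)
		}
		fmt.Println(kinds)
	}
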
	I0414 17:45:43.634013  213406 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 17:45:43.644883  213406 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 17:45:43.644955  213406 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 17:45:43.658054  213406 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0414 17:45:43.678542  213406 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 17:45:43.698007  213406 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
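The "scp memory --> <path> (N bytes)" lines denote pushing an in-memory buffer to a remote file rather than copying a local file. A rough sketch of the same idea, assuming an already-dialed golang.org/x/crypto/ssh client (this is not minikube's actual ssh_runner implementation):

	package remote

	import (
		"bytes"
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	// copyMemory pipes data into "sudo tee" on the remote host, writing it to dst.
	// client is assumed to be an established SSH connection.
	func copyMemory(client *ssh.Client, data []byte, dst string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		// tee echoes the payload to stdout; discard it and keep only the file write.
		return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", dst))
	}
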
	I0414 17:45:41.888968  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:44.387515  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:45.147015  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:47.147667  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:43.363977  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:43.364593  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:43.364642  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:43.364568  213733 retry.go:31] will retry after 1.011314768s: waiting for domain to come up
	I0414 17:45:44.377753  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:44.378268  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:44.378293  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:44.378225  213733 retry.go:31] will retry after 1.856494831s: waiting for domain to come up
	I0414 17:45:46.237268  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:46.237851  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:46.237881  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:46.237785  213733 retry.go:31] will retry after 1.749079149s: waiting for domain to come up
	I0414 17:45:47.990039  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:47.990637  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:47.990670  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:47.990601  213733 retry.go:31] will retry after 2.63350321s: waiting for domain to come up
	I0414 17:45:43.715966  213406 ssh_runner.go:195] Run: grep 192.168.50.199	control-plane.minikube.internal$ /etc/hosts
	I0414 17:45:43.720022  213406 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
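That bash one-liner makes the hosts entry idempotent: filter out any existing control-plane.minikube.internal line, append the current IP, and install the result through a temp file plus "sudo cp". The same transformation expressed as a stand-alone Go helper, for illustration only:

	package hosts

	import "strings"

	// rewriteEntry drops any line ending in "\t<name>" and appends "ip\tname",
	// mirroring the grep -v / echo pipeline in the log above.
	func rewriteEntry(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}
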
	I0414 17:45:43.733445  213406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:45:43.867405  213406 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:45:43.885300  213406 certs.go:68] Setting up /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468 for IP: 192.168.50.199
	I0414 17:45:43.885324  213406 certs.go:194] generating shared ca certs ...
	I0414 17:45:43.885345  213406 certs.go:226] acquiring lock for ca certs: {Name:mk65518f71a0fe967168d84423f624d889cf0622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:45:43.885512  213406 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key
	I0414 17:45:43.885584  213406 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key
	I0414 17:45:43.885601  213406 certs.go:256] generating profile certs ...
	I0414 17:45:43.885706  213406 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/client.key
	I0414 17:45:43.885782  213406 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/apiserver.key.3a11cdbe
	I0414 17:45:43.885845  213406 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/proxy-client.key
	I0414 17:45:43.885996  213406 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem (1338 bytes)
	W0414 17:45:43.886046  213406 certs.go:480] ignoring /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633_empty.pem, impossibly tiny 0 bytes
	I0414 17:45:43.886061  213406 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 17:45:43.886092  213406 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem (1082 bytes)
	I0414 17:45:43.886126  213406 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem (1123 bytes)
	I0414 17:45:43.886156  213406 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem (1675 bytes)
	I0414 17:45:43.886211  213406 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:45:43.886983  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 17:45:43.924611  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 17:45:43.964084  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 17:45:43.987697  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 17:45:44.015900  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0414 17:45:44.040754  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 17:45:44.075038  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 17:45:44.099117  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/embed-certs-418468/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 17:45:44.122932  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem --> /usr/share/ca-certificates/156633.pem (1338 bytes)
	I0414 17:45:44.147023  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /usr/share/ca-certificates/1566332.pem (1708 bytes)
	I0414 17:45:44.173790  213406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 17:45:44.196542  213406 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 17:45:44.214709  213406 ssh_runner.go:195] Run: openssl version
	I0414 17:45:44.220535  213406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156633.pem && ln -fs /usr/share/ca-certificates/156633.pem /etc/ssl/certs/156633.pem"
	I0414 17:45:44.235491  213406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156633.pem
	I0414 17:45:44.240204  213406 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 16:39 /usr/share/ca-certificates/156633.pem
	I0414 17:45:44.240265  213406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156633.pem
	I0414 17:45:44.246067  213406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/156633.pem /etc/ssl/certs/51391683.0"
	I0414 17:45:44.257501  213406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1566332.pem && ln -fs /usr/share/ca-certificates/1566332.pem /etc/ssl/certs/1566332.pem"
	I0414 17:45:44.269005  213406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1566332.pem
	I0414 17:45:44.273740  213406 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 16:39 /usr/share/ca-certificates/1566332.pem
	I0414 17:45:44.273793  213406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1566332.pem
	I0414 17:45:44.279740  213406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1566332.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 17:45:44.291167  213406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 17:45:44.302992  213406 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:45:44.307551  213406 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 16:31 /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:45:44.307597  213406 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:45:44.313737  213406 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 17:45:44.324505  213406 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 17:45:44.328835  213406 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 17:45:44.334805  213406 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 17:45:44.340659  213406 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 17:45:44.346307  213406 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 17:45:44.351874  213406 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 17:45:44.357745  213406 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
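For context on these checks: "openssl x509 -checkend 86400" exits non-zero when the certificate expires within the next 24 hours, which is how the runs above decide whether regeneration is needed, and the earlier "-hash" invocations print the subject-name hash used for the /etc/ssl/certs/<hash>.0 symlinks. An equivalent expiry test in Go's standard library, as a sketch:

	package certs

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate's NotAfter falls
	// inside the next d (e.g. 24*time.Hour for openssl's -checkend 86400).
	func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
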
	I0414 17:45:44.363409  213406 kubeadm.go:392] StartCluster: {Name:embed-certs-418468 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-418468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.199 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:45:44.363503  213406 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 17:45:44.363553  213406 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:45:44.409542  213406 cri.go:89] found id: ""
	I0414 17:45:44.409612  213406 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 17:45:44.421483  213406 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 17:45:44.421502  213406 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 17:45:44.421553  213406 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 17:45:44.432611  213406 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 17:45:44.433322  213406 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-418468" does not appear in /home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:45:44.433670  213406 kubeconfig.go:62] /home/jenkins/minikube-integration/20349-149500/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-418468" cluster setting kubeconfig missing "embed-certs-418468" context setting]
	I0414 17:45:44.434350  213406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:45:44.435960  213406 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 17:45:44.447295  213406 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.199
	I0414 17:45:44.447335  213406 kubeadm.go:1160] stopping kube-system containers ...
	I0414 17:45:44.447349  213406 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0414 17:45:44.447402  213406 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:45:44.483842  213406 cri.go:89] found id: ""
	I0414 17:45:44.483928  213406 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0414 17:45:44.501457  213406 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:45:44.511344  213406 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:45:44.511360  213406 kubeadm.go:157] found existing configuration files:
	
	I0414 17:45:44.511408  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:45:44.520512  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:45:44.520561  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:45:44.530434  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:45:44.539618  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:45:44.539668  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:45:44.548947  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:45:44.558310  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:45:44.558380  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:45:44.567691  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:45:44.576750  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:45:44.576795  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 17:45:44.586464  213406 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:45:44.598983  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:45:44.718594  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:45:45.695980  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:45:45.996480  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:45:46.072138  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
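The restart path re-runs kubeadm phase by phase instead of a full "kubeadm init": certs, kubeconfig, kubelet-start, control-plane, then etcd. A compressed Go sketch of that sequence (binDir and cfg are placeholders matching the paths in the log; error handling is reduced to a panic):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		binDir := "/var/lib/minikube/binaries/v1.32.2" // placeholder
		cfg := "/var/tmp/minikube/kubeadm.yaml"        // placeholder
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			cmd := exec.Command("/bin/bash", "-c",
				fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, p, cfg))
			if out, err := cmd.CombinedOutput(); err != nil {
				panic(fmt.Sprintf("phase %q failed: %v\n%s", p, err, out))
			}
		}
	}
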
	I0414 17:45:46.200254  213406 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:45:46.200333  213406 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:45:46.701083  213406 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:45:47.201283  213406 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:45:47.253490  213406 api_server.go:72] duration metric: took 1.053227948s to wait for apiserver process to appear ...
	I0414 17:45:47.253532  213406 api_server.go:88] waiting for apiserver healthz status ...
	I0414 17:45:47.253571  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:45:47.254266  213406 api_server.go:269] stopped: https://192.168.50.199:8443/healthz: Get "https://192.168.50.199:8443/healthz": dial tcp 192.168.50.199:8443: connect: connection refused
	I0414 17:45:47.753924  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:45:46.704844  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:48.887470  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:50.393514  213406 api_server.go:279] https://192.168.50.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 17:45:50.393621  213406 api_server.go:103] status: https://192.168.50.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 17:45:50.393644  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:45:50.433133  213406 api_server.go:279] https://192.168.50.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 17:45:50.433159  213406 api_server.go:103] status: https://192.168.50.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 17:45:50.753606  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:45:50.758868  213406 api_server.go:279] https://192.168.50.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 17:45:50.758895  213406 api_server.go:103] status: https://192.168.50.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 17:45:51.254607  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:45:51.259648  213406 api_server.go:279] https://192.168.50.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 17:45:51.259677  213406 api_server.go:103] status: https://192.168.50.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 17:45:51.754419  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:45:51.762365  213406 api_server.go:279] https://192.168.50.199:8443/healthz returned 200:
	ok
	I0414 17:45:51.774330  213406 api_server.go:141] control plane version: v1.32.2
	I0414 17:45:51.774361  213406 api_server.go:131] duration metric: took 4.520816141s to wait for apiserver health ...
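Note the progression in the healthz wait above: first connection refused while the apiserver binds, then 403 (the anonymous user cannot GET /healthz before the RBAC bootstrap roles exist), then 500 with "[-]poststarthook/..." entries still pending, and finally 200 "ok". A bare-bones poller in the same spirit (InsecureSkipVerify is a stand-in for the certificate handling a real client would do):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthz polls url until it returns 200 or the timeout elapses,
	// treating refused connections, 403 and 500 alike as "not ready yet".
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.50.199:8443/healthz", time.Minute); err != nil {
			panic(err)
		}
	}
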
	I0414 17:45:51.774374  213406 cni.go:84] Creating CNI manager for ""
	I0414 17:45:51.774383  213406 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:45:51.775864  213406 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 17:45:49.648757  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:52.147610  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:50.626885  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:50.627340  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:50.627368  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:50.627294  213733 retry.go:31] will retry after 2.57658473s: waiting for domain to come up
	I0414 17:45:53.207057  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:53.207562  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | unable to find current IP address of domain old-k8s-version-768580 in network mk-old-k8s-version-768580
	I0414 17:45:53.207590  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | I0414 17:45:53.207520  213733 retry.go:31] will retry after 3.448748827s: waiting for domain to come up
	I0414 17:45:51.777039  213406 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 17:45:51.806959  213406 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0414 17:45:51.836511  213406 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 17:45:51.848209  213406 system_pods.go:59] 8 kube-system pods found
	I0414 17:45:51.848270  213406 system_pods.go:61] "coredns-668d6bf9bc-z4n2r" [ee9fd5dc-3f74-4c37-8e96-c5ef30b99046] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 17:45:51.848284  213406 system_pods.go:61] "etcd-embed-certs-418468" [4622769e-1912-4b04-84c3-5dea86d25184] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0414 17:45:51.848301  213406 system_pods.go:61] "kube-apiserver-embed-certs-418468" [266cb804-e782-479b-8dac-132b529e46f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0414 17:45:51.848319  213406 system_pods.go:61] "kube-controller-manager-embed-certs-418468" [ba3c123b-8919-45cc-96aa-cdd449e77762] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0414 17:45:51.848328  213406 system_pods.go:61] "kube-proxy-6dft2" [f97366b9-4a39-4659-8e3b-c551085e33d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0414 17:45:51.848340  213406 system_pods.go:61] "kube-scheduler-embed-certs-418468" [12a8ba4d-1e6d-445c-b170-d36f15352271] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0414 17:45:51.848350  213406 system_pods.go:61] "metrics-server-f79f97bbb-9vnsg" [95cc235a-e21c-4a97-9334-d5030b9097d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:45:51.848359  213406 system_pods.go:61] "storage-provisioner" [c969e5f7-a7dc-441f-b8eb-2c3af1803f32] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0414 17:45:51.848371  213406 system_pods.go:74] duration metric: took 11.836623ms to wait for pod list to return data ...
	I0414 17:45:51.848386  213406 node_conditions.go:102] verifying NodePressure condition ...
	I0414 17:45:51.868743  213406 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 17:45:51.868781  213406 node_conditions.go:123] node cpu capacity is 2
	I0414 17:45:51.868805  213406 node_conditions.go:105] duration metric: took 20.412892ms to run NodePressure ...
	I0414 17:45:51.868835  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:45:52.239201  213406 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0414 17:45:52.242855  213406 kubeadm.go:739] kubelet initialised
	I0414 17:45:52.242878  213406 kubeadm.go:740] duration metric: took 3.647876ms waiting for restarted kubelet to initialise ...
	I0414 17:45:52.242889  213406 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:45:52.245160  213406 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-z4n2r" in "kube-system" namespace to be "Ready" ...
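The pod_ready.go checks above reduce to reading the PodReady condition off each pod's status. A minimal client-go version of that predicate, assuming an already-built clientset:

	package ready

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podReady reports whether the named pod's PodReady condition is True.
	func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
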
	I0414 17:45:51.386891  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:53.895571  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:54.645821  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:56.646257  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:56.658750  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.659197  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has current primary IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.659235  213635 main.go:141] libmachine: (old-k8s-version-768580) found domain IP: 192.168.72.58
	I0414 17:45:56.659245  213635 main.go:141] libmachine: (old-k8s-version-768580) reserving static IP address...
	I0414 17:45:56.659616  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "old-k8s-version-768580", mac: "52:54:00:d8:47:6d", ip: "192.168.72.58"} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:56.659642  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | skip adding static IP to network mk-old-k8s-version-768580 - found existing host DHCP lease matching {name: "old-k8s-version-768580", mac: "52:54:00:d8:47:6d", ip: "192.168.72.58"}
	I0414 17:45:56.659654  213635 main.go:141] libmachine: (old-k8s-version-768580) reserved static IP address 192.168.72.58 for domain old-k8s-version-768580
	I0414 17:45:56.659671  213635 main.go:141] libmachine: (old-k8s-version-768580) waiting for SSH...
	I0414 17:45:56.659708  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | Getting to WaitForSSH function...
	I0414 17:45:56.661714  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.662056  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:56.662087  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.662202  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | Using SSH client type: external
	I0414 17:45:56.662226  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | Using SSH private key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa (-rw-------)
	I0414 17:45:56.662273  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 17:45:56.662292  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | About to run SSH command:
	I0414 17:45:56.662309  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | exit 0
	I0414 17:45:56.781680  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | SSH cmd err, output: <nil>: 
	I0414 17:45:56.782109  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetConfigRaw
	I0414 17:45:56.782751  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:45:56.785158  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.785469  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:56.785502  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.785736  213635 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/config.json ...
	I0414 17:45:56.785961  213635 machine.go:93] provisionDockerMachine start ...
	I0414 17:45:56.785980  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:56.786175  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:56.788189  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.788560  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:56.788585  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.788720  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:56.788874  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:56.789008  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:56.789162  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:56.789316  213635 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:56.789519  213635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:45:56.789529  213635 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 17:45:56.890137  213635 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 17:45:56.890168  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetMachineName
	I0414 17:45:56.890394  213635 buildroot.go:166] provisioning hostname "old-k8s-version-768580"
	I0414 17:45:56.890418  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetMachineName
	I0414 17:45:56.890619  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:56.892966  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.893390  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:56.893410  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:56.893563  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:56.893750  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:56.893919  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:56.894061  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:56.894207  213635 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:56.894529  213635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:45:56.894549  213635 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-768580 && echo "old-k8s-version-768580" | sudo tee /etc/hostname
	I0414 17:45:57.008447  213635 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-768580
	
	I0414 17:45:57.008471  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:57.011111  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.011428  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:57.011469  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.011584  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:57.011804  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:57.011985  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:57.012096  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:57.012205  213635 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:57.012392  213635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:45:57.012407  213635 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-768580' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-768580/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-768580' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 17:45:57.132689  213635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 17:45:57.132739  213635 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20349-149500/.minikube CaCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20349-149500/.minikube}
	I0414 17:45:57.132763  213635 buildroot.go:174] setting up certificates
	I0414 17:45:57.132773  213635 provision.go:84] configureAuth start
	I0414 17:45:57.132784  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetMachineName
	I0414 17:45:57.133116  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:45:57.136014  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.136345  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:57.136374  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.136550  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:57.139546  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.140028  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:57.140059  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.140266  213635 provision.go:143] copyHostCerts
	I0414 17:45:57.140335  213635 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem, removing ...
	I0414 17:45:57.140361  213635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem
	I0414 17:45:57.140462  213635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem (1082 bytes)
	I0414 17:45:57.140589  213635 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem, removing ...
	I0414 17:45:57.140603  213635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem
	I0414 17:45:57.140655  213635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem (1123 bytes)
	I0414 17:45:57.140743  213635 exec_runner.go:144] found /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem, removing ...
	I0414 17:45:57.140761  213635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem
	I0414 17:45:57.140798  213635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem (1675 bytes)
	I0414 17:45:57.140884  213635 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-768580 san=[127.0.0.1 192.168.72.58 localhost minikube old-k8s-version-768580]
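provision.go:117 issues a machine server certificate signed by the minikube CA with the listed IP and DNS SANs. The core of that operation, reduced to Go's standard library (a freshly generated throwaway CA stands in for ca.pem/ca-key.pem, and error handling is elided for brevity):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA in place of the persisted minikube CA key pair.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * 365 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert carrying the IP and DNS SANs from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-768580"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * 365 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.58")},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-768580"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
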
	I0414 17:45:57.638227  213635 provision.go:177] copyRemoteCerts
	I0414 17:45:57.638317  213635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 17:45:57.638348  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:57.641173  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.641530  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:57.641563  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.641714  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:57.641916  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:57.642092  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:57.642232  213635 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:45:57.724240  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 17:45:57.749634  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0414 17:45:57.776416  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 17:45:57.801692  213635 provision.go:87] duration metric: took 668.902854ms to configureAuth
	I0414 17:45:57.801722  213635 buildroot.go:189] setting minikube options for container-runtime
	I0414 17:45:57.801958  213635 config.go:182] Loaded profile config "old-k8s-version-768580": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 17:45:57.802054  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:57.804673  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.805023  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:57.805051  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:57.805250  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:57.805434  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:57.805597  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:57.805715  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:57.805892  213635 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:57.806134  213635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:45:57.806153  213635 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 17:45:58.022403  213635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
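The restart step above works by dropping an environment file that the guest's crio unit reads at startup. A minimal shell sketch of the same write, assuming (as on the minikube ISO) that crio.service sources /etc/sysconfig/crio.minikube:

	sudo mkdir -p /etc/sysconfig
	# the option string is exactly what the log shows; the CIDR is the cluster's service range
	printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio
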
	I0414 17:45:58.022437  213635 machine.go:96] duration metric: took 1.236460782s to provisionDockerMachine
	I0414 17:45:58.022452  213635 start.go:293] postStartSetup for "old-k8s-version-768580" (driver="kvm2")
	I0414 17:45:58.022466  213635 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 17:45:58.022505  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:58.022841  213635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 17:45:58.022875  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:58.025802  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.026223  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:58.026254  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.026507  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:58.026657  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:58.026765  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:58.026909  213635 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:45:58.112706  213635 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 17:45:58.117225  213635 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 17:45:58.117253  213635 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/addons for local assets ...
	I0414 17:45:58.117324  213635 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/files for local assets ...
	I0414 17:45:58.117416  213635 filesync.go:149] local asset: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem -> 1566332.pem in /etc/ssl/certs
	I0414 17:45:58.117503  213635 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 17:45:58.128036  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:45:58.152497  213635 start.go:296] duration metric: took 130.019138ms for postStartSetup
	I0414 17:45:58.152538  213635 fix.go:56] duration metric: took 19.889962017s for fixHost
	I0414 17:45:58.152587  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:58.155565  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.156016  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:58.156050  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.156233  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:58.156440  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:58.156667  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:58.156863  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:58.157079  213635 main.go:141] libmachine: Using SSH client type: native
	I0414 17:45:58.157365  213635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I0414 17:45:58.157380  213635 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 17:45:58.262578  213635 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744652758.231554158
	
	I0414 17:45:58.262603  213635 fix.go:216] guest clock: 1744652758.231554158
	I0414 17:45:58.262612  213635 fix.go:229] Guest: 2025-04-14 17:45:58.231554158 +0000 UTC Remote: 2025-04-14 17:45:58.152542501 +0000 UTC m=+34.908827189 (delta=79.011657ms)
	I0414 17:45:58.262635  213635 fix.go:200] guest clock delta is within tolerance: 79.011657ms
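(The delta above is guest minus remote: 1744652758.231554158 − 1744652758.152542501 = 0.079011657 s, i.e. the reported 79.011657ms; since it is inside the tolerance, the guest clock is left untouched.)
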
	I0414 17:45:58.262641  213635 start.go:83] releasing machines lock for "old-k8s-version-768580", held for 20.000092548s
	I0414 17:45:58.262660  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:58.262963  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:45:58.265585  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.265964  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:58.266004  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.266157  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:58.266649  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:58.266849  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .DriverName
	I0414 17:45:58.266978  213635 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 17:45:58.267030  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:58.267047  213635 ssh_runner.go:195] Run: cat /version.json
	I0414 17:45:58.267073  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHHostname
	I0414 17:45:58.269647  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.269715  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.270071  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:58.270098  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.270124  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:58.270157  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:58.270238  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:58.270344  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHPort
	I0414 17:45:58.270424  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:58.270497  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHKeyPath
	I0414 17:45:58.270566  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:58.270678  213635 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:45:58.270730  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetSSHUsername
	I0414 17:45:58.270836  213635 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/old-k8s-version-768580/id_rsa Username:docker}
	I0414 17:45:54.250565  213406 pod_ready.go:103] pod "coredns-668d6bf9bc-z4n2r" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:56.250955  213406 pod_ready.go:103] pod "coredns-668d6bf9bc-z4n2r" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:58.251402  213406 pod_ready.go:103] pod "coredns-668d6bf9bc-z4n2r" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:58.343285  213635 ssh_runner.go:195] Run: systemctl --version
	I0414 17:45:58.367988  213635 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 17:45:58.519539  213635 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 17:45:58.526018  213635 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 17:45:58.526083  213635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 17:45:58.542624  213635 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
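The find/mv pipeline above neutralizes any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix (loopback configs are excluded, and per the warning above none exist anyway). An equivalent, slightly safer spelling of the same command:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
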
	I0414 17:45:58.542648  213635 start.go:495] detecting cgroup driver to use...
	I0414 17:45:58.542718  213635 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 17:45:58.558731  213635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 17:45:58.572169  213635 docker.go:217] disabling cri-docker service (if available) ...
	I0414 17:45:58.572211  213635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 17:45:58.585163  213635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 17:45:58.598940  213635 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 17:45:58.721667  213635 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 17:45:58.879281  213635 docker.go:233] disabling docker service ...
	I0414 17:45:58.879343  213635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 17:45:58.896126  213635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 17:45:58.908836  213635 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 17:45:59.033428  213635 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 17:45:59.166628  213635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 17:45:59.181684  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 17:45:59.200617  213635 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0414 17:45:59.200680  213635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:59.211541  213635 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 17:45:59.211600  213635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:59.223657  213635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 17:45:59.235487  213635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
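Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause-image and cgroup settings sketched below; the TOML section headers are illustrative of CRI-O's stock drop-in layout, since the sed commands only rewrite the individual keys:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
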
	I0414 17:45:59.248000  213635 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 17:45:59.261365  213635 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 17:45:59.273037  213635 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 17:45:59.273132  213635 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 17:45:59.288901  213635 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
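The sysctl probe fails only because br_netfilter is not loaded yet, which is why the log treats status 255 as non-fatal; the recovery path it then takes is simply:

	sudo modprobe br_netfilter                            # creates /proc/sys/net/bridge/*
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"   # allow routed pod traffic
	sudo sysctl net.bridge.bridge-nf-call-iptables        # a re-check (not re-run in the log) would now resolve
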
	I0414 17:45:59.300042  213635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:45:59.423635  213635 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 17:45:59.529685  213635 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 17:45:59.529758  213635 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 17:45:59.534592  213635 start.go:563] Will wait 60s for crictl version
	I0414 17:45:59.534640  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:45:59.538651  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 17:45:59.578522  213635 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 17:45:59.578595  213635 ssh_runner.go:195] Run: crio --version
	I0414 17:45:59.605740  213635 ssh_runner.go:195] Run: crio --version
	I0414 17:45:59.635045  213635 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0414 17:45:56.385712  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:58.386662  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:00.388088  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:58.647473  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:01.146666  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:45:59.636069  213635 main.go:141] libmachine: (old-k8s-version-768580) Calling .GetIP
	I0414 17:45:59.638462  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:59.638803  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:47:6d", ip: ""} in network mk-old-k8s-version-768580: {Iface:virbr4 ExpiryTime:2025-04-14 18:39:34 +0000 UTC Type:0 Mac:52:54:00:d8:47:6d Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:old-k8s-version-768580 Clientid:01:52:54:00:d8:47:6d}
	I0414 17:45:59.638829  213635 main.go:141] libmachine: (old-k8s-version-768580) DBG | domain old-k8s-version-768580 has defined IP address 192.168.72.58 and MAC address 52:54:00:d8:47:6d in network mk-old-k8s-version-768580
	I0414 17:45:59.639064  213635 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0414 17:45:59.643370  213635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
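The one-liner above is an idempotent /etc/hosts update: strip any stale host.minikube.internal entry, append the current gateway mapping, and copy the temp file back into place (cp rather than mv preserves the file's ownership and mode). Spelled out, and reused later for control-plane.minikube.internal:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts   # drop the old tab-separated entry, keep the rest
	  echo "192.168.72.1	host.minikube.internal"       # append the fresh mapping
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
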
	I0414 17:45:59.657222  213635 kubeadm.go:883] updating cluster {Name:old-k8s-version-768580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 17:45:59.657362  213635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 17:45:59.657409  213635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:45:59.704172  213635 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 17:45:59.704247  213635 ssh_runner.go:195] Run: which lz4
	I0414 17:45:59.708554  213635 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 17:45:59.712850  213635 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 17:45:59.712882  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0414 17:46:01.354039  213635 crio.go:462] duration metric: took 1.645520081s to copy over tarball
	I0414 17:46:01.354112  213635 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 17:45:59.252026  213406 pod_ready.go:93] pod "coredns-668d6bf9bc-z4n2r" in "kube-system" namespace has status "Ready":"True"
	I0414 17:45:59.252050  213406 pod_ready.go:82] duration metric: took 7.006866592s for pod "coredns-668d6bf9bc-z4n2r" in "kube-system" namespace to be "Ready" ...
	I0414 17:45:59.252074  213406 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:45:59.255615  213406 pod_ready.go:93] pod "etcd-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:45:59.255638  213406 pod_ready.go:82] duration metric: took 3.555461ms for pod "etcd-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:45:59.255649  213406 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:01.263173  213406 pod_ready.go:103] pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:02.887635  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:05.387807  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:03.646378  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:05.647729  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:08.146880  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:04.261653  213635 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.907516994s)
	I0414 17:46:04.261683  213635 crio.go:469] duration metric: took 2.907610683s to extract the tarball
	I0414 17:46:04.261695  213635 ssh_runner.go:146] rm: /preloaded.tar.lz4
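The preload path above: copy the 473 MB lz4 tarball onto the VM over scp, unpack it under /var (where CRI-O keeps its image store), then delete it; the --xattrs flags preserve the file capabilities the Kubernetes binaries rely on. The extraction step by itself:

	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4   # free the ~473 MB once the layers are unpacked
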
	I0414 17:46:04.307964  213635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 17:46:04.345077  213635 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 17:46:04.345112  213635 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 17:46:04.345199  213635 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:46:04.345203  213635 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.345239  213635 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.345249  213635 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0414 17:46:04.345318  213635 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.345321  213635 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.345209  213635 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.345436  213635 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.347103  213635 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.347115  213635 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.347128  213635 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.347132  213635 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.347093  213635 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.347109  213635 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.347093  213635 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0414 17:46:04.347164  213635 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:46:04.489472  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.490905  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.494468  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.498887  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.499207  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.503007  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.528129  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0414 17:46:04.591926  213635 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0414 17:46:04.591983  213635 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.592033  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.628524  213635 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0414 17:46:04.628568  213635 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.628604  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.691347  213635 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0414 17:46:04.691455  213635 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.691347  213635 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0414 17:46:04.691571  213635 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.691392  213635 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0414 17:46:04.691634  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.691661  213635 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.691393  213635 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0414 17:46:04.691706  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.691731  213635 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.691759  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.691509  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.696665  213635 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0414 17:46:04.696697  213635 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0414 17:46:04.696714  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.696727  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.696730  213635 ssh_runner.go:195] Run: which crictl
	I0414 17:46:04.707222  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.707277  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.709851  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.710042  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.834502  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 17:46:04.834653  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.834668  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.856960  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:04.857034  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:04.857094  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:04.857179  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:04.983051  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 17:46:04.983060  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 17:46:04.983060  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 17:46:05.024632  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 17:46:05.024779  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 17:46:05.031272  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 17:46:05.031399  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 17:46:05.161869  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0414 17:46:05.170557  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0414 17:46:05.170702  213635 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 17:46:05.195041  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0414 17:46:05.195041  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0414 17:46:05.208270  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0414 17:46:05.208341  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0414 17:46:05.220290  213635 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0414 17:46:05.331240  213635 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:46:05.471903  213635 cache_images.go:92] duration metric: took 1.126766183s to LoadCachedImages
	W0414 17:46:05.471974  213635 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20349-149500/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0414 17:46:05.471985  213635 kubeadm.go:934] updating node { 192.168.72.58 8443 v1.20.0 crio true true} ...
	I0414 17:46:05.472082  213635 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-768580 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 17:46:05.472172  213635 ssh_runner.go:195] Run: crio config
	I0414 17:46:05.531642  213635 cni.go:84] Creating CNI manager for ""
	I0414 17:46:05.531667  213635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:46:05.531678  213635 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 17:46:05.531697  213635 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.58 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-768580 NodeName:old-k8s-version-768580 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0414 17:46:05.531815  213635 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-768580"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 17:46:05.531897  213635 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0414 17:46:05.542769  213635 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 17:46:05.542861  213635 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 17:46:05.552930  213635 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0414 17:46:05.570087  213635 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 17:46:05.588483  213635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0414 17:46:05.606443  213635 ssh_runner.go:195] Run: grep 192.168.72.58	control-plane.minikube.internal$ /etc/hosts
	I0414 17:46:05.610756  213635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 17:46:05.622873  213635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:46:05.770402  213635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:46:05.789353  213635 certs.go:68] Setting up /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580 for IP: 192.168.72.58
	I0414 17:46:05.789374  213635 certs.go:194] generating shared ca certs ...
	I0414 17:46:05.789395  213635 certs.go:226] acquiring lock for ca certs: {Name:mk65518f71a0fe967168d84423f624d889cf0622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:46:05.789542  213635 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key
	I0414 17:46:05.789598  213635 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key
	I0414 17:46:05.789613  213635 certs.go:256] generating profile certs ...
	I0414 17:46:05.789717  213635 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/client.key
	I0414 17:46:05.789816  213635 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.key.0f5f550a
	I0414 17:46:05.789911  213635 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/proxy-client.key
	I0414 17:46:05.790030  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem (1338 bytes)
	W0414 17:46:05.790067  213635 certs.go:480] ignoring /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633_empty.pem, impossibly tiny 0 bytes
	I0414 17:46:05.790077  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 17:46:05.790130  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem (1082 bytes)
	I0414 17:46:05.790163  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem (1123 bytes)
	I0414 17:46:05.790195  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem (1675 bytes)
	I0414 17:46:05.790251  213635 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem (1708 bytes)
	I0414 17:46:05.790829  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 17:46:05.852348  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 17:46:05.879909  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 17:46:05.924274  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 17:46:05.968318  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 17:46:06.004046  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 17:46:06.039672  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 17:46:06.068041  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/old-k8s-version-768580/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 17:46:06.093159  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/156633.pem --> /usr/share/ca-certificates/156633.pem (1338 bytes)
	I0414 17:46:06.118949  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/ssl/certs/1566332.pem --> /usr/share/ca-certificates/1566332.pem (1708 bytes)
	I0414 17:46:06.144480  213635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 17:46:06.171159  213635 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 17:46:06.189499  213635 ssh_runner.go:195] Run: openssl version
	I0414 17:46:06.196060  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156633.pem && ln -fs /usr/share/ca-certificates/156633.pem /etc/ssl/certs/156633.pem"
	I0414 17:46:06.206864  213635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156633.pem
	I0414 17:46:06.211352  213635 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 16:39 /usr/share/ca-certificates/156633.pem
	I0414 17:46:06.211407  213635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156633.pem
	I0414 17:46:06.217759  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/156633.pem /etc/ssl/certs/51391683.0"
	I0414 17:46:06.228546  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1566332.pem && ln -fs /usr/share/ca-certificates/1566332.pem /etc/ssl/certs/1566332.pem"
	I0414 17:46:06.239146  213635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1566332.pem
	I0414 17:46:06.243457  213635 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 16:39 /usr/share/ca-certificates/1566332.pem
	I0414 17:46:06.243511  213635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1566332.pem
	I0414 17:46:06.249141  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1566332.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 17:46:06.259582  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 17:46:06.269988  213635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:46:06.275271  213635 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 16:31 /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:46:06.275324  213635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 17:46:06.282428  213635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
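The link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: TLS clients locate CA certificates by the hash that openssl x509 -hash prints, so each PEM needs a <hash>.0 symlink in /etc/ssl/certs. Reproducing the last link by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 per the log
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
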
	I0414 17:46:06.293404  213635 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 17:46:06.298115  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 17:46:06.304513  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 17:46:06.310675  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 17:46:06.317218  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 17:46:06.324114  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 17:46:06.331759  213635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
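Each -checkend 86400 above makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 h); all six checks pass silently here, so no certificate is regenerated. For example:

	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "valid for at least another 24h"
	else
	  echo "expires within 24h; would be regenerated"
	fi
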
	I0414 17:46:06.337898  213635 kubeadm.go:392] StartCluster: {Name:old-k8s-version-768580 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-768580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 17:46:06.337991  213635 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 17:46:06.338037  213635 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:46:06.381282  213635 cri.go:89] found id: ""
	I0414 17:46:06.381351  213635 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 17:46:06.392326  213635 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 17:46:06.392345  213635 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 17:46:06.392385  213635 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 17:46:06.402275  213635 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 17:46:06.403224  213635 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-768580" does not appear in /home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:46:06.403594  213635 kubeconfig.go:62] /home/jenkins/minikube-integration/20349-149500/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-768580" cluster setting kubeconfig missing "old-k8s-version-768580" context setting]
	I0414 17:46:06.404086  213635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:46:06.460048  213635 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 17:46:06.470500  213635 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.58
	I0414 17:46:06.470535  213635 kubeadm.go:1160] stopping kube-system containers ...
	I0414 17:46:06.470546  213635 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0414 17:46:06.470624  213635 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 17:46:06.509152  213635 cri.go:89] found id: ""
	I0414 17:46:06.509210  213635 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0414 17:46:06.526163  213635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:46:06.535901  213635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:46:06.535928  213635 kubeadm.go:157] found existing configuration files:
	
	I0414 17:46:06.535978  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:46:06.545480  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:46:06.545535  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:46:06.554610  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:46:06.563294  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:46:06.563347  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:46:06.572284  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:46:06.581431  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:46:06.581475  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:46:06.591211  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:46:06.600340  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:46:06.600408  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
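The grep-and-remove sequence above is stale-kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint, and since grep exits non-zero in every case here (status 2, because the files do not even exist), each one is removed so kubeadm can regenerate it. A hedged sketch of that loop:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // pruneStaleConfigs removes any kubeconfig under /etc/kubernetes that does
    // not mention the expected control-plane endpoint. grep exits 1 when the
    // string is absent and 2 when the file is unreadable or missing (the case
    // in the log above); either way the file is removed for regeneration.
    func pruneStaleConfigs(endpoint string) error {
        for _, c := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + c
            if exec.Command("sudo", "grep", endpoint, path).Run() != nil {
                if err := exec.Command("sudo", "rm", "-f", path).Run(); err != nil {
                    return err
                }
            }
        }
        return nil
    }

    func main() {
        fmt.Println(pruneStaleConfigs("https://control-plane.minikube.internal:8443"))
    }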
	I0414 17:46:06.609494  213635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:46:06.618800  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:46:06.747191  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:46:07.478890  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:46:07.697670  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 17:46:07.793179  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
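Rather than a full kubeadm init, the rebuild replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) with PATH pointed at the pinned v1.20.0 binaries so the bundled kubeadm is used instead of anything on the host. A sketch of driving those phases in order (run locally here, whereas minikube executes them over SSH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // reinitControlPlane replays the kubeadm init phases from the log above,
    // in order, against the generated config.
    func reinitControlPlane() error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := []string{"env", "PATH=/var/lib/minikube/binaries/v1.20.0:" + os.Getenv("PATH"),
                "kubeadm", "init", "phase"}
            args = append(args, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("kubeadm init phase %v: %v\n%s", p, err, out)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(reinitControlPlane())
    }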
	I0414 17:46:07.893891  213635 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:46:07.893971  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
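From this point the run is a readiness poll: the same pgrep is retried at roughly 500ms intervals (visible in the timestamps of the repeats further down) until a kube-apiserver process appears or the wait times out, and in this run it never appears. A minimal sketch of that loop (the helper name is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer re-runs pgrep every 500ms until a kube-apiserver
    // process shows up or the deadline passes; pgrep exits 0 only when a
    // matching process exists. In the run above, the deadline is what fires.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil // apiserver process found
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
    }

    func main() {
        fmt.Println(waitForAPIServer(time.Minute))
    }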
	I0414 17:46:03.762310  213406 pod_ready.go:103] pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:04.762763  213406 pod_ready.go:93] pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:46:04.762794  213406 pod_ready.go:82] duration metric: took 5.507135949s for pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.762808  213406 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.767311  213406 pod_ready.go:93] pod "kube-controller-manager-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:46:04.767329  213406 pod_ready.go:82] duration metric: took 4.514084ms for pod "kube-controller-manager-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.767337  213406 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6dft2" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.771924  213406 pod_ready.go:93] pod "kube-proxy-6dft2" in "kube-system" namespace has status "Ready":"True"
	I0414 17:46:04.771944  213406 pod_ready.go:82] duration metric: took 4.599852ms for pod "kube-proxy-6dft2" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.771954  213406 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.776235  213406 pod_ready.go:93] pod "kube-scheduler-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:46:04.776251  213406 pod_ready.go:82] duration metric: took 4.290311ms for pod "kube-scheduler-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:04.776264  213406 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace to be "Ready" ...
	I0414 17:46:06.782241  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
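The pod_ready lines interleaved through this log come from three other profiles running in parallel (PIDs 213406, 212269, 212456); each is waiting up to 4m0s for a metrics-server pod's Ready condition to flip to True, logging "Ready":"False" on every poll until then. A hedged client-go sketch of the underlying check, assuming a recent client-go (clientset construction omitted):

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the pod's PodReady condition is True, which
    // is what the pod_ready.go polls above keep logging as "Ready":"False"
    // until the metrics-server pods come up.
    func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

A caller would invoke this in a sleep-retry loop against the 4m0s deadline, which matches the cadence of the pod_ready.go lines here.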
	I0414 17:46:07.388743  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:09.886293  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:10.645757  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:12.646190  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:08.394410  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:08.895002  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:09.395022  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:09.895018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:10.394996  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:10.894824  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:11.394638  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:11.894428  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:12.394452  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:12.894017  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:09.281824  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:11.282179  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:11.886469  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:13.886515  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:15.146498  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:17.147156  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:13.394405  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:13.894519  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:14.394847  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:14.894997  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:15.394630  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:15.895007  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:16.394831  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:16.894632  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:17.395016  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:17.894993  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:13.783938  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:16.282525  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:16.387995  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:18.887504  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:19.645731  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:21.645945  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:18.394976  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:18.895068  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:19.394434  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:19.894886  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:20.395037  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:20.895061  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:21.394429  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:21.894500  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:22.394822  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:22.895080  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:18.782119  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:20.785464  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:23.281701  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:21.387824  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:23.886390  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:24.145922  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:26.645858  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:23.394953  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:23.894339  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:24.395018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:24.895037  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:25.394854  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:25.894984  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:26.395005  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:26.895007  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:27.395035  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:27.895034  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:25.282520  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:27.780903  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:26.386775  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:28.886919  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:28.646216  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:30.646635  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:33.146515  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:28.394580  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:28.895018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:29.394479  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:29.894485  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:30.394483  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:30.894471  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:31.395020  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:31.895014  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:32.395034  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:32.895028  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:29.782338  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:32.280971  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:31.389561  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:33.885891  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:35.646041  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:38.146195  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:33.394018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:33.894501  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:34.394226  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:34.894064  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:35.394952  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:35.895016  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:36.394607  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:36.895006  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:37.394673  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:37.894995  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:34.282968  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:36.781804  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:35.886870  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:38.385985  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:40.386210  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:40.646578  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:43.146373  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:38.394272  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:38.894875  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:39.394148  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:39.895036  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:40.394685  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:40.895010  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:41.394981  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:41.894634  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:42.394270  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:42.895029  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:38.783097  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:41.281604  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:43.281689  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:42.387307  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:44.885815  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:45.646331  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:48.146832  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:43.394362  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:43.894756  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:44.395057  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:44.895022  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:45.394470  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:45.894701  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:46.395033  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:46.895033  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:47.394321  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:47.895018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:45.781213  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:47.782055  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:46.886132  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:48.887731  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:50.646089  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:52.646393  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:48.394554  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:48.894703  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:49.394432  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:49.894498  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:50.395063  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:50.894449  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:51.395000  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:51.895026  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:52.394891  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:52.894471  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:49.782883  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:52.282500  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:51.386370  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:53.387056  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:55.387096  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:55.145864  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:57.145973  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:53.394778  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:53.894664  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:54.394089  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:54.894622  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:55.394495  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:55.894999  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:56.395001  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:56.894095  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:57.394283  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:57.894977  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:54.282957  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:56.781374  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:57.887077  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:00.386841  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:59.146801  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:01.645801  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:46:58.394681  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:58.895019  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:59.394738  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:59.894984  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:00.394802  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:00.894854  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:01.395049  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:01.895019  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:02.394977  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:02.894501  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:46:58.782051  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:00.782255  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:02.782525  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:02.886126  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:04.886471  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:03.646142  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:06.146967  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:03.394365  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:03.895039  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:04.395027  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:04.894987  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:05.394716  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:05.894080  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:06.394955  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:06.894670  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:07.394902  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:07.894929  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:07.895008  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:07.936773  213635 cri.go:89] found id: ""
	I0414 17:47:07.936809  213635 logs.go:282] 0 containers: []
	W0414 17:47:07.936822  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:07.936830  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:07.936908  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:07.971073  213635 cri.go:89] found id: ""
	I0414 17:47:07.971104  213635 logs.go:282] 0 containers: []
	W0414 17:47:07.971113  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:07.971118  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:07.971171  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:08.010389  213635 cri.go:89] found id: ""
	I0414 17:47:08.010414  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.010422  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:08.010427  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:08.010482  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:08.044286  213635 cri.go:89] found id: ""
	I0414 17:47:08.044322  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.044334  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:08.044344  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:08.044413  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:08.079985  213635 cri.go:89] found id: ""
	I0414 17:47:08.080008  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.080016  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:08.080021  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:08.080071  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:08.119431  213635 cri.go:89] found id: ""
	I0414 17:47:08.119456  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.119468  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:08.119474  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:08.119529  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:08.152203  213635 cri.go:89] found id: ""
	I0414 17:47:08.152227  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.152234  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:08.152239  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:08.152287  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:08.187035  213635 cri.go:89] found id: ""
	I0414 17:47:08.187064  213635 logs.go:282] 0 containers: []
	W0414 17:47:08.187075  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
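The sweep above is the post-mortem inventory: for each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) crictl is asked for matching container IDs, and every query returns nothing, confirming the control plane never produced a single container. A sketch of that enumeration:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainersByName mirrors the crictl sweep above: for each expected
    // component it collects matching container IDs in any state. Every
    // component mapping to an empty slice means nothing ever started.
    func listContainersByName(components []string) (map[string][]string, error) {
        found := make(map[string][]string)
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                return nil, err
            }
            found[name] = strings.Fields(string(out)) // one ID per line, possibly none
        }
        return found, nil
    }

    func main() {
        ids, err := listContainersByName([]string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        })
        fmt.Println(ids, err)
    }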
	I0414 17:47:08.187092  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:08.187106  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0414 17:47:05.283544  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:07.781984  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:06.887145  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:09.386391  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:08.645957  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:10.646258  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:13.147462  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	W0414 17:47:08.312274  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:08.312301  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:08.312315  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:08.382714  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:08.382745  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:08.421561  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:08.421588  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:08.476855  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:08.476891  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
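With no containers to inspect, log gathering falls back to host-level sources: kubectl describe nodes (which fails with connection refused on localhost:8443, the direct consequence of the empty crictl sweep), journalctl for the kubelet and crio units, a level-filtered dmesg, and a container listing whose `which crictl || echo crictl` fallback keeps the pipeline alive even if crictl is missing from PATH. A sketch that runs the same collector set (the source labels are my own):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs runs the same host-level collectors as the cycle above and
    // keeps each command's combined output keyed by source, even on failure,
    // since a failing collector is itself diagnostic.
    func gatherLogs() map[string]string {
        sources := map[string]string{
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "CRI-O":            "sudo journalctl -u crio -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        logs := make(map[string]string)
        for name, cmdline := range sources {
            out, _ := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
            logs[name] = string(out)
        }
        return logs
    }

    func main() {
        for name, out := range gatherLogs() {
            fmt.Printf("== %s ==\n%s\n", name, out)
        }
    }

The identical gather cycles that repeat below follow this same pattern on each retry of the apiserver wait.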
	I0414 17:47:10.991104  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:11.004501  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:11.004575  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:11.039060  213635 cri.go:89] found id: ""
	I0414 17:47:11.039086  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.039094  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:11.039099  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:11.039145  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:11.073857  213635 cri.go:89] found id: ""
	I0414 17:47:11.073883  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.073890  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:11.073896  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:11.073942  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:11.106411  213635 cri.go:89] found id: ""
	I0414 17:47:11.106436  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.106493  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:11.106505  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:11.106550  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:11.145377  213635 cri.go:89] found id: ""
	I0414 17:47:11.145406  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.145416  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:11.145423  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:11.145481  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:11.178621  213635 cri.go:89] found id: ""
	I0414 17:47:11.178650  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.178661  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:11.178668  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:11.178731  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:11.212798  213635 cri.go:89] found id: ""
	I0414 17:47:11.212832  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.212840  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:11.212846  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:11.212902  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:11.258553  213635 cri.go:89] found id: ""
	I0414 17:47:11.258576  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.258584  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:11.258589  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:11.258637  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:11.318616  213635 cri.go:89] found id: ""
	I0414 17:47:11.318658  213635 logs.go:282] 0 containers: []
	W0414 17:47:11.318669  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:11.318680  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:11.318695  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:11.381468  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:11.381500  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:11.395975  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:11.395999  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:11.468932  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:11.468954  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:11.468971  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:11.547431  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:11.547464  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:10.281538  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:12.284013  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:11.386803  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:13.387771  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:15.645939  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:17.647578  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:14.089096  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:14.105644  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:14.105710  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:14.139763  213635 cri.go:89] found id: ""
	I0414 17:47:14.139791  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.139798  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:14.139804  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:14.139866  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:14.174571  213635 cri.go:89] found id: ""
	I0414 17:47:14.174594  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.174600  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:14.174605  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:14.174659  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:14.208140  213635 cri.go:89] found id: ""
	I0414 17:47:14.208164  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.208171  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:14.208177  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:14.208233  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:14.240906  213635 cri.go:89] found id: ""
	I0414 17:47:14.240940  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.240952  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:14.240959  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:14.241023  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:14.273549  213635 cri.go:89] found id: ""
	I0414 17:47:14.273581  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.273593  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:14.273599  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:14.273652  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:14.308758  213635 cri.go:89] found id: ""
	I0414 17:47:14.308791  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.308798  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:14.308805  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:14.308868  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:14.343464  213635 cri.go:89] found id: ""
	I0414 17:47:14.343492  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.343503  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:14.343510  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:14.343571  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:14.377456  213635 cri.go:89] found id: ""
	I0414 17:47:14.377483  213635 logs.go:282] 0 containers: []
	W0414 17:47:14.377493  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:14.377503  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:14.377517  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:14.428031  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:14.428059  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:14.441682  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:14.441706  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:14.511433  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:14.511456  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:14.511470  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:14.591334  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:14.591373  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:17.131067  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:17.150199  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:17.150257  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:17.195868  213635 cri.go:89] found id: ""
	I0414 17:47:17.195895  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.195902  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:17.195909  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:17.195968  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:17.248530  213635 cri.go:89] found id: ""
	I0414 17:47:17.248562  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.248573  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:17.248600  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:17.248664  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:17.302561  213635 cri.go:89] found id: ""
	I0414 17:47:17.302592  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.302603  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:17.302611  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:17.302676  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:17.337154  213635 cri.go:89] found id: ""
	I0414 17:47:17.337185  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.337196  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:17.337204  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:17.337262  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:17.372117  213635 cri.go:89] found id: ""
	I0414 17:47:17.372142  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.372149  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:17.372154  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:17.372209  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:17.409162  213635 cri.go:89] found id: ""
	I0414 17:47:17.409190  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.409199  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:17.409204  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:17.409253  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:17.444609  213635 cri.go:89] found id: ""
	I0414 17:47:17.444636  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.444652  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:17.444660  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:17.444721  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:17.484188  213635 cri.go:89] found id: ""
	I0414 17:47:17.484216  213635 logs.go:282] 0 containers: []
	W0414 17:47:17.484226  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:17.484238  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:17.484252  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:17.523203  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:17.523249  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:17.573785  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:17.573818  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:17.586989  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:17.587014  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:17.659369  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:17.659392  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:17.659408  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:14.781454  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:16.782152  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:15.888032  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:18.387319  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:20.147048  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:22.646239  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:20.241973  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:20.255211  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:20.255288  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:20.292821  213635 cri.go:89] found id: ""
	I0414 17:47:20.292854  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.292866  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:20.292873  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:20.292933  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:20.331101  213635 cri.go:89] found id: ""
	I0414 17:47:20.331150  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.331162  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:20.331169  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:20.331247  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:20.369990  213635 cri.go:89] found id: ""
	I0414 17:47:20.370015  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.370022  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:20.370027  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:20.370096  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:20.406805  213635 cri.go:89] found id: ""
	I0414 17:47:20.406836  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.406846  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:20.406852  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:20.406913  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:20.442314  213635 cri.go:89] found id: ""
	I0414 17:47:20.442340  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.442348  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:20.442353  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:20.442413  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:20.476588  213635 cri.go:89] found id: ""
	I0414 17:47:20.476617  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.476627  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:20.476634  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:20.476695  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:20.510731  213635 cri.go:89] found id: ""
	I0414 17:47:20.510782  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.510821  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:20.510833  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:20.510906  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:20.545219  213635 cri.go:89] found id: ""
	I0414 17:47:20.545244  213635 logs.go:282] 0 containers: []
	W0414 17:47:20.545255  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:20.545277  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:20.545292  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:20.583147  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:20.583180  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:20.636347  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:20.636382  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:20.650452  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:20.650477  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:20.722784  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:20.722811  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:20.722828  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:19.282759  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:21.782197  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:20.886279  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:22.886745  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:24.886852  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:25.145867  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:27.146656  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:23.298966  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:23.312159  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:23.312251  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:23.353883  213635 cri.go:89] found id: ""
	I0414 17:47:23.353907  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.353915  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:23.353921  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:23.354005  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:23.391644  213635 cri.go:89] found id: ""
	I0414 17:47:23.391671  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.391680  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:23.391688  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:23.391732  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:23.427612  213635 cri.go:89] found id: ""
	I0414 17:47:23.427644  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.427652  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:23.427658  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:23.427719  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:23.463296  213635 cri.go:89] found id: ""
	I0414 17:47:23.463324  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.463335  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:23.463344  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:23.463408  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:23.497377  213635 cri.go:89] found id: ""
	I0414 17:47:23.497407  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.497418  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:23.497426  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:23.497487  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:23.534162  213635 cri.go:89] found id: ""
	I0414 17:47:23.534209  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.534222  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:23.534229  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:23.534299  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:23.574494  213635 cri.go:89] found id: ""
	I0414 17:47:23.574524  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.574535  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:23.574542  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:23.574611  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:23.612210  213635 cri.go:89] found id: ""
	I0414 17:47:23.612265  213635 logs.go:282] 0 containers: []
	W0414 17:47:23.612279  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
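Each listing cycle in this retry loop queries the CRI runtime once per control-plane component; every query comes back with an empty ID list (found id: "" / 0 containers), so there is no container to pull logs from and the gatherer falls back to journald and dmesg. A rough stand-alone equivalent of one cycle, assuming it runs on the node with sudo and crictl available:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The same component names the log above probes, in the same order.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// --quiet prints one container ID per line; empty output is the
			// "0 containers" case seen for every component in this report.
			out, _ := exec.Command("sudo", "crictl", "ps", "-a",
				"--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			fmt.Printf("%-24s %d containers\n", name, len(ids))
		}
	}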
	I0414 17:47:23.612289  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:23.612304  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:23.689765  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:23.689802  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:23.725675  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:23.725709  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:23.778002  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:23.778031  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:23.793019  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:23.793052  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:23.866451  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
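Every describe-nodes attempt in this section fails identically: kubectl cannot reach the apiserver at localhost:8443, which is consistent with the empty kube-apiserver listings above (no apiserver container is running, so nothing answers the port). A minimal probe for that failure mode, assuming it runs on the affected node, with the host and port copied from the error text:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// kubectl's "connection refused" boils down to this dial failing.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // the report's failure mode
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}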
	I0414 17:47:26.367039  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:26.381917  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:26.381987  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:26.416638  213635 cri.go:89] found id: ""
	I0414 17:47:26.416661  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.416668  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:26.416674  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:26.416721  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:26.458324  213635 cri.go:89] found id: ""
	I0414 17:47:26.458349  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.458360  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:26.458367  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:26.458423  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:26.493044  213635 cri.go:89] found id: ""
	I0414 17:47:26.493096  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.493109  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:26.493116  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:26.493181  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:26.527654  213635 cri.go:89] found id: ""
	I0414 17:47:26.527690  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.527702  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:26.527709  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:26.527769  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:26.565607  213635 cri.go:89] found id: ""
	I0414 17:47:26.565633  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.565639  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:26.565645  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:26.565692  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:26.598157  213635 cri.go:89] found id: ""
	I0414 17:47:26.598186  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.598196  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:26.598204  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:26.598264  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:26.631534  213635 cri.go:89] found id: ""
	I0414 17:47:26.631572  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.631581  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:26.631586  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:26.631652  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:26.669109  213635 cri.go:89] found id: ""
	I0414 17:47:26.669134  213635 logs.go:282] 0 containers: []
	W0414 17:47:26.669145  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:26.669155  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:26.669169  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:26.722048  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:26.722075  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:26.735141  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:26.735160  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:26.808950  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:26.808979  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:26.808996  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:26.896662  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:26.896693  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
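The "container status" command gathered above encodes a double fallback: run whichever crictl `which` finds (or a bare crictl if `which` prints nothing), and if the whole CRI query fails, try docker ps -a instead. A simplified sketch of that ordering which drops the `which` lookup and keeps only the crictl-then-docker chain:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus prefers the CRI runtime and falls back to docker,
	// mirroring the shell one-liner in the log above.
	func containerStatus() ([]byte, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		if err == nil {
			return out, nil
		}
		// crictl failed (missing binary or no CRI socket): try docker instead.
		return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("no runtime answered:", err)
			return
		}
		fmt.Print(string(out))
	}

In the runs above the command returns without error, so the docker branch would not be reached; crictl works, it simply finds no control-plane containers.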
	I0414 17:47:23.785953  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:26.284260  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:27.386201  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:29.386726  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:29.146828  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:31.646619  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:29.440079  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:29.454761  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:29.454837  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:29.488451  213635 cri.go:89] found id: ""
	I0414 17:47:29.488480  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.488491  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:29.488499  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:29.488548  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:29.520861  213635 cri.go:89] found id: ""
	I0414 17:47:29.520891  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.520902  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:29.520908  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:29.520963  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:29.557913  213635 cri.go:89] found id: ""
	I0414 17:47:29.557939  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.557949  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:29.557956  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:29.558013  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:29.596839  213635 cri.go:89] found id: ""
	I0414 17:47:29.596878  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.596889  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:29.596896  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:29.596959  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:29.631746  213635 cri.go:89] found id: ""
	I0414 17:47:29.631779  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.631789  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:29.631797  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:29.631864  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:29.667006  213635 cri.go:89] found id: ""
	I0414 17:47:29.667034  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.667048  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:29.667055  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:29.667111  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:29.700458  213635 cri.go:89] found id: ""
	I0414 17:47:29.700490  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.700500  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:29.700507  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:29.700569  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:29.736776  213635 cri.go:89] found id: ""
	I0414 17:47:29.736804  213635 logs.go:282] 0 containers: []
	W0414 17:47:29.736814  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:29.736825  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:29.736840  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:29.776831  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:29.776871  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:29.830601  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:29.830632  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:29.844366  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:29.844396  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:29.920571  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:29.920595  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:29.920611  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:32.502415  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:32.516740  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:32.516806  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:32.551360  213635 cri.go:89] found id: ""
	I0414 17:47:32.551380  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.551387  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:32.551393  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:32.551440  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:32.588757  213635 cri.go:89] found id: ""
	I0414 17:47:32.588785  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.588795  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:32.588802  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:32.588869  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:32.622369  213635 cri.go:89] found id: ""
	I0414 17:47:32.622394  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.622405  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:32.622413  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:32.622473  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:32.658310  213635 cri.go:89] found id: ""
	I0414 17:47:32.658334  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.658343  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:32.658350  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:32.658408  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:32.692724  213635 cri.go:89] found id: ""
	I0414 17:47:32.692756  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.692768  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:32.692776  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:32.692836  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:32.729086  213635 cri.go:89] found id: ""
	I0414 17:47:32.729113  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.729121  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:32.729127  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:32.729182  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:32.761853  213635 cri.go:89] found id: ""
	I0414 17:47:32.761878  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.761886  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:32.761891  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:32.761937  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:32.794906  213635 cri.go:89] found id: ""
	I0414 17:47:32.794931  213635 logs.go:282] 0 containers: []
	W0414 17:47:32.794938  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:32.794945  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:32.794956  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:32.876985  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:32.877027  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:32.915184  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:32.915210  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:32.965418  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:32.965449  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:32.978245  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:32.978270  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:33.046592  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:28.782031  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:31.281960  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:33.283783  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:31.885919  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:34.385966  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:34.146066  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:36.645902  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:35.547721  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:35.562729  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:35.562794  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:35.600323  213635 cri.go:89] found id: ""
	I0414 17:47:35.600353  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.600365  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:35.600374  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:35.600426  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:35.639091  213635 cri.go:89] found id: ""
	I0414 17:47:35.639116  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.639124  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:35.639130  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:35.639185  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:35.674709  213635 cri.go:89] found id: ""
	I0414 17:47:35.674743  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.674755  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:35.674763  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:35.674825  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:35.712316  213635 cri.go:89] found id: ""
	I0414 17:47:35.712340  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.712347  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:35.712353  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:35.712399  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:35.746497  213635 cri.go:89] found id: ""
	I0414 17:47:35.746525  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.746535  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:35.746542  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:35.746611  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:35.787414  213635 cri.go:89] found id: ""
	I0414 17:47:35.787436  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.787445  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:35.787460  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:35.787514  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:35.818830  213635 cri.go:89] found id: ""
	I0414 17:47:35.818857  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.818867  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:35.818874  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:35.818938  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:35.854020  213635 cri.go:89] found id: ""
	I0414 17:47:35.854048  213635 logs.go:282] 0 containers: []
	W0414 17:47:35.854059  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:35.854082  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:35.854095  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:35.907502  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:35.907530  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:35.922223  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:35.922248  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:35.992058  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:35.992085  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:35.992101  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:36.070377  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:36.070413  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:35.782944  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:38.283160  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:36.388560  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:38.886997  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:38.647280  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:41.146882  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:38.612483  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:38.625570  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:38.625639  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:38.664060  213635 cri.go:89] found id: ""
	I0414 17:47:38.664084  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.664104  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:38.664112  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:38.664168  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:38.698505  213635 cri.go:89] found id: ""
	I0414 17:47:38.698535  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.698546  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:38.698553  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:38.698614  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:38.735113  213635 cri.go:89] found id: ""
	I0414 17:47:38.735142  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.735153  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:38.735161  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:38.735229  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:38.773173  213635 cri.go:89] found id: ""
	I0414 17:47:38.773203  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.773211  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:38.773216  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:38.773270  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:38.807136  213635 cri.go:89] found id: ""
	I0414 17:47:38.807167  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.807178  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:38.807186  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:38.807244  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:38.844350  213635 cri.go:89] found id: ""
	I0414 17:47:38.844375  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.844384  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:38.844392  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:38.844445  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:38.879565  213635 cri.go:89] found id: ""
	I0414 17:47:38.879587  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.879594  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:38.879599  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:38.879658  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:38.916412  213635 cri.go:89] found id: ""
	I0414 17:47:38.916449  213635 logs.go:282] 0 containers: []
	W0414 17:47:38.916457  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:38.916465  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:38.916475  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:38.953944  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:38.953972  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:39.004989  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:39.005019  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:39.018618  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:39.018640  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:39.091095  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:39.091122  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:39.091136  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:41.675012  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:41.689023  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:41.689085  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:41.722675  213635 cri.go:89] found id: ""
	I0414 17:47:41.722698  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.722707  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:41.722715  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:41.722774  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:41.757787  213635 cri.go:89] found id: ""
	I0414 17:47:41.757808  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.757815  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:41.757822  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:41.757895  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:41.792938  213635 cri.go:89] found id: ""
	I0414 17:47:41.792970  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.792981  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:41.792990  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:41.793060  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:41.826121  213635 cri.go:89] found id: ""
	I0414 17:47:41.826145  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.826153  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:41.826158  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:41.826206  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:41.862687  213635 cri.go:89] found id: ""
	I0414 17:47:41.862717  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.862728  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:41.862735  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:41.862810  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:41.901905  213635 cri.go:89] found id: ""
	I0414 17:47:41.901935  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.901945  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:41.901953  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:41.902010  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:41.936560  213635 cri.go:89] found id: ""
	I0414 17:47:41.936591  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.936602  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:41.936609  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:41.936673  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:41.968609  213635 cri.go:89] found id: ""
	I0414 17:47:41.968640  213635 logs.go:282] 0 containers: []
	W0414 17:47:41.968651  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:41.968663  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:41.968677  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:42.037691  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:42.037725  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:42.037742  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:42.123173  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:42.123222  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:42.164982  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:42.165018  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:42.217567  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:42.217601  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:40.283210  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:42.286058  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:40.887506  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:43.387362  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:43.646155  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:46.145968  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:48.147182  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:44.733645  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:44.748083  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:44.748144  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:44.782103  213635 cri.go:89] found id: ""
	I0414 17:47:44.782131  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.782141  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:44.782148  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:44.782200  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:44.825594  213635 cri.go:89] found id: ""
	I0414 17:47:44.825640  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.825652  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:44.825659  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:44.825719  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:44.858967  213635 cri.go:89] found id: ""
	I0414 17:47:44.859000  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.859017  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:44.859024  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:44.859088  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:44.892965  213635 cri.go:89] found id: ""
	I0414 17:47:44.892990  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.892999  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:44.893007  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:44.893073  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:44.926983  213635 cri.go:89] found id: ""
	I0414 17:47:44.927007  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.927014  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:44.927019  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:44.927066  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:44.961406  213635 cri.go:89] found id: ""
	I0414 17:47:44.961459  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.961471  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:44.961478  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:44.961540  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:44.996262  213635 cri.go:89] found id: ""
	I0414 17:47:44.996287  213635 logs.go:282] 0 containers: []
	W0414 17:47:44.996296  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:44.996304  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:44.996368  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:45.029476  213635 cri.go:89] found id: ""
	I0414 17:47:45.029507  213635 logs.go:282] 0 containers: []
	W0414 17:47:45.029518  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:45.029529  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:45.029543  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:45.100081  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:45.100110  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:45.100122  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:45.179286  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:45.179319  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:45.220129  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:45.220166  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:45.275257  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:45.275292  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:47.792170  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:47.805709  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:47.805769  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:47.842023  213635 cri.go:89] found id: ""
	I0414 17:47:47.842050  213635 logs.go:282] 0 containers: []
	W0414 17:47:47.842058  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:47.842063  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:47.842118  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:47.884228  213635 cri.go:89] found id: ""
	I0414 17:47:47.884260  213635 logs.go:282] 0 containers: []
	W0414 17:47:47.884271  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:47.884278  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:47.884338  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:47.924093  213635 cri.go:89] found id: ""
	I0414 17:47:47.924121  213635 logs.go:282] 0 containers: []
	W0414 17:47:47.924130  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:47.924137  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:47.924193  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:47.965378  213635 cri.go:89] found id: ""
	I0414 17:47:47.965406  213635 logs.go:282] 0 containers: []
	W0414 17:47:47.965416  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:47.965423  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:47.965538  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:48.003136  213635 cri.go:89] found id: ""
	I0414 17:47:48.003165  213635 logs.go:282] 0 containers: []
	W0414 17:47:48.003178  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:48.003187  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:48.003253  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:48.042729  213635 cri.go:89] found id: ""
	I0414 17:47:48.042758  213635 logs.go:282] 0 containers: []
	W0414 17:47:48.042768  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:48.042774  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:48.042837  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:48.077654  213635 cri.go:89] found id: ""
	I0414 17:47:48.077682  213635 logs.go:282] 0 containers: []
	W0414 17:47:48.077692  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:48.077699  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:48.077749  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:48.109967  213635 cri.go:89] found id: ""
	I0414 17:47:48.109991  213635 logs.go:282] 0 containers: []
	W0414 17:47:48.109998  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:48.110006  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:48.110017  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:48.125245  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:48.125277  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:48.194705  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:48.194725  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:48.194738  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:44.783825  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:47.283708  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:45.886120  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:47.886616  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:50.387382  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:50.646377  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:53.145406  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:48.287160  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:48.287196  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:48.335515  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:48.335547  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:50.892108  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:50.905172  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:50.905234  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:50.940079  213635 cri.go:89] found id: ""
	I0414 17:47:50.940104  213635 logs.go:282] 0 containers: []
	W0414 17:47:50.940111  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:50.940116  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:50.940176  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:50.973887  213635 cri.go:89] found id: ""
	I0414 17:47:50.973912  213635 logs.go:282] 0 containers: []
	W0414 17:47:50.973919  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:50.973926  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:50.973982  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:51.012547  213635 cri.go:89] found id: ""
	I0414 17:47:51.012568  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.012577  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:51.012584  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:51.012640  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:51.053157  213635 cri.go:89] found id: ""
	I0414 17:47:51.053180  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.053188  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:51.053196  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:51.053249  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:51.110289  213635 cri.go:89] found id: ""
	I0414 17:47:51.110319  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.110330  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:51.110337  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:51.110393  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:51.144361  213635 cri.go:89] found id: ""
	I0414 17:47:51.144383  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.144394  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:51.144402  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:51.144530  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:51.177527  213635 cri.go:89] found id: ""
	I0414 17:47:51.177563  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.177571  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:51.177576  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:51.177636  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:51.210869  213635 cri.go:89] found id: ""
	I0414 17:47:51.210891  213635 logs.go:282] 0 containers: []
	W0414 17:47:51.210899  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:51.210907  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:51.210918  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:51.247291  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:51.247317  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:51.299677  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:51.299706  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:51.313384  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:51.313409  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:51.388212  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:51.388239  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:51.388254  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:49.781341  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:51.782513  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:52.886676  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:55.386338  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:55.145724  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:57.146515  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:53.976114  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:53.989051  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:53.989115  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:54.023756  213635 cri.go:89] found id: ""
	I0414 17:47:54.023788  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.023799  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:54.023805  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:54.023869  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:54.061807  213635 cri.go:89] found id: ""
	I0414 17:47:54.061853  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.061865  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:54.061872  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:54.061930  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:54.095835  213635 cri.go:89] found id: ""
	I0414 17:47:54.095878  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.095890  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:54.095897  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:54.096006  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:54.131513  213635 cri.go:89] found id: ""
	I0414 17:47:54.131535  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.131543  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:54.131548  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:54.131594  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:54.171002  213635 cri.go:89] found id: ""
	I0414 17:47:54.171024  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.171031  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:54.171037  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:54.171095  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:54.206779  213635 cri.go:89] found id: ""
	I0414 17:47:54.206801  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.206808  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:54.206818  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:54.206876  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:54.252485  213635 cri.go:89] found id: ""
	I0414 17:47:54.252533  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.252547  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:54.252555  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:54.252628  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:54.290628  213635 cri.go:89] found id: ""
	I0414 17:47:54.290656  213635 logs.go:282] 0 containers: []
	W0414 17:47:54.290667  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:54.290676  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:54.290689  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:54.364000  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:54.364020  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:54.364032  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:54.446117  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:54.446152  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:54.488749  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:54.488775  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:54.540890  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:54.540922  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:57.055546  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:47:57.069362  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:47:57.069420  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:47:57.112914  213635 cri.go:89] found id: ""
	I0414 17:47:57.112942  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.112949  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:47:57.112955  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:47:57.113002  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:57.149533  213635 cri.go:89] found id: ""
	I0414 17:47:57.149553  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.149560  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:47:57.149565  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:47:57.149622  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:47:57.184595  213635 cri.go:89] found id: ""
	I0414 17:47:57.184624  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.184632  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:47:57.184637  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:47:57.184683  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:47:57.219904  213635 cri.go:89] found id: ""
	I0414 17:47:57.219931  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.219942  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:47:57.219949  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:47:57.220008  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:47:57.255709  213635 cri.go:89] found id: ""
	I0414 17:47:57.255736  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.255745  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:47:57.255750  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:47:57.255809  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:47:57.289390  213635 cri.go:89] found id: ""
	I0414 17:47:57.289413  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.289419  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:47:57.289425  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:47:57.289474  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:47:57.329950  213635 cri.go:89] found id: ""
	I0414 17:47:57.329972  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.329978  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:47:57.329983  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:47:57.330028  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:47:57.365856  213635 cri.go:89] found id: ""
	I0414 17:47:57.365888  213635 logs.go:282] 0 containers: []
	W0414 17:47:57.365901  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:47:57.365911  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:47:57.365925  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:47:57.378637  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:47:57.378661  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:47:57.446639  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:47:57.446662  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:47:57.446676  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:47:57.536049  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:47:57.536086  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:47:57.585473  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:47:57.585506  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:47:53.782957  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:56.286401  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:57.387720  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:59.886486  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:47:59.647389  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:02.147002  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:00.135711  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:00.151060  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:00.151131  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:00.184972  213635 cri.go:89] found id: ""
	I0414 17:48:00.185005  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.185016  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:00.185023  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:00.185088  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:00.218051  213635 cri.go:89] found id: ""
	I0414 17:48:00.218085  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.218093  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:00.218099  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:00.218156  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:00.251291  213635 cri.go:89] found id: ""
	I0414 17:48:00.251318  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.251325  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:00.251331  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:00.251392  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:00.291683  213635 cri.go:89] found id: ""
	I0414 17:48:00.291706  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.291713  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:00.291718  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:00.291765  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:00.329316  213635 cri.go:89] found id: ""
	I0414 17:48:00.329342  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.329350  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:00.329356  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:00.329409  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:00.364819  213635 cri.go:89] found id: ""
	I0414 17:48:00.364848  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.364856  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:00.364861  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:00.364905  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:00.404928  213635 cri.go:89] found id: ""
	I0414 17:48:00.404961  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.404971  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:00.404978  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:00.405040  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:00.439708  213635 cri.go:89] found id: ""
	I0414 17:48:00.439739  213635 logs.go:282] 0 containers: []
	W0414 17:48:00.439750  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:00.439761  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:00.439776  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:00.479252  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:00.479285  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:00.533545  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:00.533576  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:00.546920  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:00.546952  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:00.614440  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:00.614461  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:00.614476  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:03.197930  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:03.212912  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:03.212973  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:03.272435  213635 cri.go:89] found id: ""
	I0414 17:48:03.272467  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.272479  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:03.272487  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:03.272554  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:47:58.781206  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:00.781677  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:03.286395  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:01.886559  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:03.887796  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:04.147694  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:06.647249  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:03.336351  213635 cri.go:89] found id: ""
	I0414 17:48:03.336373  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.336380  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:03.336386  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:03.336430  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:03.370368  213635 cri.go:89] found id: ""
	I0414 17:48:03.370398  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.370408  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:03.370422  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:03.370475  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:03.408402  213635 cri.go:89] found id: ""
	I0414 17:48:03.408429  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.408436  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:03.408442  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:03.408491  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:03.442912  213635 cri.go:89] found id: ""
	I0414 17:48:03.442939  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.442950  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:03.442957  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:03.443019  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:03.479439  213635 cri.go:89] found id: ""
	I0414 17:48:03.479467  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.479476  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:03.479481  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:03.479544  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:03.517971  213635 cri.go:89] found id: ""
	I0414 17:48:03.517993  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.518000  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:03.518005  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:03.518059  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:03.556177  213635 cri.go:89] found id: ""
	I0414 17:48:03.556208  213635 logs.go:282] 0 containers: []
	W0414 17:48:03.556216  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:03.556224  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:03.556237  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:03.594142  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:03.594167  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:03.644688  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:03.644718  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:03.658140  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:03.658164  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:03.729627  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:03.729649  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:03.729663  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:06.309939  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:06.323927  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:06.323990  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:06.364388  213635 cri.go:89] found id: ""
	I0414 17:48:06.364412  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.364426  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:06.364431  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:06.364477  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:06.398800  213635 cri.go:89] found id: ""
	I0414 17:48:06.398821  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.398828  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:06.398833  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:06.398885  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:06.442842  213635 cri.go:89] found id: ""
	I0414 17:48:06.442873  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.442884  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:06.442891  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:06.442973  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:06.485910  213635 cri.go:89] found id: ""
	I0414 17:48:06.485945  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.485955  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:06.485962  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:06.486023  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:06.520624  213635 cri.go:89] found id: ""
	I0414 17:48:06.520656  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.520668  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:06.520675  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:06.520741  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:06.555790  213635 cri.go:89] found id: ""
	I0414 17:48:06.555833  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.555845  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:06.555853  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:06.555916  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:06.589144  213635 cri.go:89] found id: ""
	I0414 17:48:06.589166  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.589173  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:06.589177  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:06.589223  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:06.623771  213635 cri.go:89] found id: ""
	I0414 17:48:06.623808  213635 logs.go:282] 0 containers: []
	W0414 17:48:06.623824  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:06.623833  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:06.623843  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:06.679003  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:06.679039  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:06.695303  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:06.695328  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:06.770562  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:06.770585  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:06.770597  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:06.850617  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:06.850652  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:05.782269  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:07.783336  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:06.387181  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:08.886322  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:09.145702  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:11.147099  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:09.390500  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:09.403827  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:09.403885  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:09.438395  213635 cri.go:89] found id: ""
	I0414 17:48:09.438420  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.438428  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:09.438434  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:09.438484  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:09.473071  213635 cri.go:89] found id: ""
	I0414 17:48:09.473098  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.473106  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:09.473112  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:09.473159  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:09.506175  213635 cri.go:89] found id: ""
	I0414 17:48:09.506205  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.506216  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:09.506223  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:09.506272  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:09.540488  213635 cri.go:89] found id: ""
	I0414 17:48:09.540511  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.540518  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:09.540523  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:09.540583  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:09.576189  213635 cri.go:89] found id: ""
	I0414 17:48:09.576222  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.576233  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:09.576241  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:09.576302  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:09.607908  213635 cri.go:89] found id: ""
	I0414 17:48:09.607937  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.607945  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:09.607950  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:09.608000  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:09.642069  213635 cri.go:89] found id: ""
	I0414 17:48:09.642098  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.642108  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:09.642115  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:09.642177  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:09.675434  213635 cri.go:89] found id: ""
	I0414 17:48:09.675463  213635 logs.go:282] 0 containers: []
	W0414 17:48:09.675474  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:09.675484  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:09.675496  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:09.754118  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:09.754154  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:09.797336  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:09.797373  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:09.849366  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:09.849407  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:09.863427  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:09.863458  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:09.934735  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:12.435482  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:12.449310  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:12.449374  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:12.484115  213635 cri.go:89] found id: ""
	I0414 17:48:12.484143  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.484153  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:12.484160  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:12.484213  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:12.521972  213635 cri.go:89] found id: ""
	I0414 17:48:12.521994  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.522001  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:12.522012  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:12.522071  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:12.554192  213635 cri.go:89] found id: ""
	I0414 17:48:12.554219  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.554229  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:12.554237  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:12.554296  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:12.587420  213635 cri.go:89] found id: ""
	I0414 17:48:12.587450  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.587460  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:12.587467  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:12.587526  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:12.621562  213635 cri.go:89] found id: ""
	I0414 17:48:12.621588  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.621599  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:12.621608  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:12.621672  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:12.660123  213635 cri.go:89] found id: ""
	I0414 17:48:12.660147  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.660155  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:12.660160  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:12.660216  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:12.693979  213635 cri.go:89] found id: ""
	I0414 17:48:12.694010  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.694021  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:12.694029  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:12.694097  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:12.728017  213635 cri.go:89] found id: ""
	I0414 17:48:12.728043  213635 logs.go:282] 0 containers: []
	W0414 17:48:12.728051  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:12.728060  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:12.728072  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:12.782896  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:12.782927  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:12.795655  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:12.795679  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:12.865150  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:12.865183  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:12.865197  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:12.950645  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:12.950682  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:10.285784  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:12.781397  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:10.886362  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:12.888044  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:15.386245  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:13.646393  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:16.146335  212456 pod_ready.go:103] pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:16.640867  212456 pod_ready.go:82] duration metric: took 4m0.000569834s for pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace to be "Ready" ...
	E0414 17:48:16.640896  212456 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-7s74z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0414 17:48:16.640935  212456 pod_ready.go:39] duration metric: took 4m12.70748193s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:48:16.640979  212456 kubeadm.go:597] duration metric: took 4m20.79960225s to restartPrimaryControlPlane
	W0414 17:48:16.641051  212456 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0414 17:48:16.641091  212456 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 17:48:15.490793  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:15.504867  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:15.504941  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:15.538968  213635 cri.go:89] found id: ""
	I0414 17:48:15.538990  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.538998  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:15.539003  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:15.539049  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:15.573937  213635 cri.go:89] found id: ""
	I0414 17:48:15.573961  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.573968  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:15.573973  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:15.574019  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:15.609320  213635 cri.go:89] found id: ""
	I0414 17:48:15.609346  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.609360  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:15.609367  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:15.609425  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:15.641598  213635 cri.go:89] found id: ""
	I0414 17:48:15.641626  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.641635  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:15.641641  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:15.641695  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:15.675213  213635 cri.go:89] found id: ""
	I0414 17:48:15.675239  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.675248  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:15.675255  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:15.675313  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:15.710542  213635 cri.go:89] found id: ""
	I0414 17:48:15.710565  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.710572  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:15.710578  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:15.710623  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:15.745699  213635 cri.go:89] found id: ""
	I0414 17:48:15.745724  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.745735  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:15.745742  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:15.745792  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:15.782559  213635 cri.go:89] found id: ""
	I0414 17:48:15.782586  213635 logs.go:282] 0 containers: []
	W0414 17:48:15.782596  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:15.782605  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:15.782619  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:15.837926  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:15.837964  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:15.854293  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:15.854333  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:15.944741  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:15.944761  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:15.944773  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:16.032687  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:16.032716  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:14.784926  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:17.280964  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:17.886293  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:20.386161  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:18.574911  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:18.589009  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:18.589060  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:18.625705  213635 cri.go:89] found id: ""
	I0414 17:48:18.625730  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.625738  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:18.625743  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:18.625796  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:18.659670  213635 cri.go:89] found id: ""
	I0414 17:48:18.659704  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.659713  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:18.659719  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:18.659762  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:18.694973  213635 cri.go:89] found id: ""
	I0414 17:48:18.694997  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.695005  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:18.695011  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:18.695083  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:18.733777  213635 cri.go:89] found id: ""
	I0414 17:48:18.733801  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.733808  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:18.733813  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:18.733881  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:18.765747  213635 cri.go:89] found id: ""
	I0414 17:48:18.765768  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.765775  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:18.765780  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:18.765856  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:18.799558  213635 cri.go:89] found id: ""
	I0414 17:48:18.799585  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.799595  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:18.799601  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:18.799653  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:18.835245  213635 cri.go:89] found id: ""
	I0414 17:48:18.835279  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.835291  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:18.835300  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:18.835354  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:18.870176  213635 cri.go:89] found id: ""
	I0414 17:48:18.870201  213635 logs.go:282] 0 containers: []
	W0414 17:48:18.870212  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:18.870222  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:18.870236  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:18.883166  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:18.883195  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:18.946103  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:18.946128  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:18.946145  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:19.023462  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:19.023496  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:19.067254  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:19.067281  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:21.619412  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:21.635163  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:21.635233  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:21.671680  213635 cri.go:89] found id: ""
	I0414 17:48:21.671705  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.671713  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:21.671719  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:21.671767  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:21.709955  213635 cri.go:89] found id: ""
	I0414 17:48:21.709987  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.709998  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:21.710005  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:21.710064  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:21.743179  213635 cri.go:89] found id: ""
	I0414 17:48:21.743202  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.743209  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:21.743214  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:21.743267  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:21.775835  213635 cri.go:89] found id: ""
	I0414 17:48:21.775862  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.775870  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:21.775875  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:21.775920  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:21.810164  213635 cri.go:89] found id: ""
	I0414 17:48:21.810190  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.810201  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:21.810207  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:21.810253  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:21.848616  213635 cri.go:89] found id: ""
	I0414 17:48:21.848639  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.848646  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:21.848651  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:21.848717  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:21.887985  213635 cri.go:89] found id: ""
	I0414 17:48:21.888014  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.888024  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:21.888030  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:21.888076  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:21.927965  213635 cri.go:89] found id: ""
	I0414 17:48:21.927992  213635 logs.go:282] 0 containers: []
	W0414 17:48:21.928003  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:21.928013  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:21.928028  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:21.989253  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:21.989294  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:22.003399  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:22.003429  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:22.071849  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:22.071872  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:22.071889  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:22.149857  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:22.149888  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:19.283105  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:21.782995  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:22.388207  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:24.886911  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:24.691531  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:24.706169  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:24.706230  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:24.745747  213635 cri.go:89] found id: ""
	I0414 17:48:24.745780  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.745791  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:24.745799  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:24.745886  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:24.785261  213635 cri.go:89] found id: ""
	I0414 17:48:24.785284  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.785291  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:24.785296  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:24.785351  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:24.824491  213635 cri.go:89] found id: ""
	I0414 17:48:24.824525  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.824536  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:24.824550  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:24.824606  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:24.868655  213635 cri.go:89] found id: ""
	I0414 17:48:24.868683  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.868696  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:24.868704  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:24.868769  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:24.910959  213635 cri.go:89] found id: ""
	I0414 17:48:24.910982  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.910989  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:24.910995  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:24.911053  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:24.944036  213635 cri.go:89] found id: ""
	I0414 17:48:24.944065  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.944073  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:24.944078  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:24.944127  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:24.977481  213635 cri.go:89] found id: ""
	I0414 17:48:24.977512  213635 logs.go:282] 0 containers: []
	W0414 17:48:24.977522  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:24.977529  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:24.977589  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:25.010063  213635 cri.go:89] found id: ""
	I0414 17:48:25.010087  213635 logs.go:282] 0 containers: []
	W0414 17:48:25.010094  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:25.010103  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:25.010114  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:25.062645  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:25.062680  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:25.077120  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:25.077144  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:25.151533  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:25.151553  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:25.151565  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:25.230945  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:25.230985  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:27.774758  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:27.789640  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:27.789692  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:27.822128  213635 cri.go:89] found id: ""
	I0414 17:48:27.822162  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.822169  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:27.822175  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:27.822227  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:27.858364  213635 cri.go:89] found id: ""
	I0414 17:48:27.858394  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.858401  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:27.858406  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:27.858452  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:27.893587  213635 cri.go:89] found id: ""
	I0414 17:48:27.893618  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.893628  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:27.893636  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:27.893695  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:27.930766  213635 cri.go:89] found id: ""
	I0414 17:48:27.930799  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.930810  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:27.930817  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:27.930879  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:27.962936  213635 cri.go:89] found id: ""
	I0414 17:48:27.962966  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.962977  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:27.962983  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:27.963036  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:27.999471  213635 cri.go:89] found id: ""
	I0414 17:48:27.999503  213635 logs.go:282] 0 containers: []
	W0414 17:48:27.999511  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:27.999517  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:27.999575  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:28.030604  213635 cri.go:89] found id: ""
	I0414 17:48:28.030636  213635 logs.go:282] 0 containers: []
	W0414 17:48:28.030645  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:28.030650  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:28.030704  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:28.066407  213635 cri.go:89] found id: ""
	I0414 17:48:28.066436  213635 logs.go:282] 0 containers: []
	W0414 17:48:28.066446  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:28.066457  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:28.066471  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:28.118182  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:28.118210  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:28.131007  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:28.131031  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:28.198468  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:28.198488  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:28.198500  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:24.283310  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:26.283749  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:27.386845  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:29.387642  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:28.286352  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:28.286387  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:30.826694  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:30.839877  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:30.839949  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:30.873980  213635 cri.go:89] found id: ""
	I0414 17:48:30.874010  213635 logs.go:282] 0 containers: []
	W0414 17:48:30.874021  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:30.874028  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:30.874087  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:30.909567  213635 cri.go:89] found id: ""
	I0414 17:48:30.909593  213635 logs.go:282] 0 containers: []
	W0414 17:48:30.909600  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:30.909606  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:30.909661  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:30.943382  213635 cri.go:89] found id: ""
	I0414 17:48:30.943414  213635 logs.go:282] 0 containers: []
	W0414 17:48:30.943424  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:30.943431  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:30.943487  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:30.976444  213635 cri.go:89] found id: ""
	I0414 17:48:30.976477  213635 logs.go:282] 0 containers: []
	W0414 17:48:30.976488  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:30.976496  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:30.976555  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:31.010623  213635 cri.go:89] found id: ""
	I0414 17:48:31.010651  213635 logs.go:282] 0 containers: []
	W0414 17:48:31.010662  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:31.010669  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:31.010727  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:31.049542  213635 cri.go:89] found id: ""
	I0414 17:48:31.049568  213635 logs.go:282] 0 containers: []
	W0414 17:48:31.049578  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:31.049585  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:31.049646  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:31.082301  213635 cri.go:89] found id: ""
	I0414 17:48:31.082326  213635 logs.go:282] 0 containers: []
	W0414 17:48:31.082336  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:31.082343  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:31.082403  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:31.115742  213635 cri.go:89] found id: ""
	I0414 17:48:31.115768  213635 logs.go:282] 0 containers: []
	W0414 17:48:31.115776  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:31.115784  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:31.115794  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:31.167568  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:31.167598  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:31.180202  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:31.180229  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:31.247958  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:31.247980  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:31.247995  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:31.337341  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:31.337379  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:28.780817  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:30.781721  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:32.782156  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:31.886992  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:34.386180  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:33.892139  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:33.905803  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:33.905884  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:33.945429  213635 cri.go:89] found id: ""
	I0414 17:48:33.945458  213635 logs.go:282] 0 containers: []
	W0414 17:48:33.945468  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:33.945476  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:33.945524  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:33.978018  213635 cri.go:89] found id: ""
	I0414 17:48:33.978047  213635 logs.go:282] 0 containers: []
	W0414 17:48:33.978056  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:33.978063  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:33.978113  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:34.013902  213635 cri.go:89] found id: ""
	I0414 17:48:34.013926  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.013934  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:34.013940  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:34.013986  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:34.052308  213635 cri.go:89] found id: ""
	I0414 17:48:34.052340  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.052351  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:34.052358  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:34.052423  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:34.092541  213635 cri.go:89] found id: ""
	I0414 17:48:34.092565  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.092572  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:34.092577  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:34.092638  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:34.126690  213635 cri.go:89] found id: ""
	I0414 17:48:34.126725  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.126736  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:34.126745  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:34.126810  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:34.161043  213635 cri.go:89] found id: ""
	I0414 17:48:34.161072  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.161081  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:34.161087  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:34.161148  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:34.195793  213635 cri.go:89] found id: ""
	I0414 17:48:34.195817  213635 logs.go:282] 0 containers: []
	W0414 17:48:34.195825  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:34.195835  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:34.195847  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:34.238858  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:34.238890  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:34.294092  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:34.294122  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:34.310473  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:34.310510  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:34.377489  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:34.377517  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:34.377535  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:36.963220  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:36.976594  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:36.976663  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:37.009685  213635 cri.go:89] found id: ""
	I0414 17:48:37.009710  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.009720  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:37.009727  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:37.009780  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:37.044805  213635 cri.go:89] found id: ""
	I0414 17:48:37.044832  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.044845  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:37.044852  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:37.044915  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:37.096059  213635 cri.go:89] found id: ""
	I0414 17:48:37.096082  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.096089  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:37.096094  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:37.096146  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:37.132630  213635 cri.go:89] found id: ""
	I0414 17:48:37.132654  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.132664  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:37.132670  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:37.132731  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:37.168840  213635 cri.go:89] found id: ""
	I0414 17:48:37.168865  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.168874  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:37.168881  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:37.168940  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:37.202226  213635 cri.go:89] found id: ""
	I0414 17:48:37.202250  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.202258  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:37.202264  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:37.202321  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:37.236649  213635 cri.go:89] found id: ""
	I0414 17:48:37.236677  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.236687  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:37.236695  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:37.236758  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:37.270393  213635 cri.go:89] found id: ""
	I0414 17:48:37.270417  213635 logs.go:282] 0 containers: []
	W0414 17:48:37.270427  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:37.270438  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:37.270454  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:37.320463  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:37.320492  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:37.334355  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:37.334388  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:37.402650  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:37.402674  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:37.402686  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:37.479961  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:37.479999  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:34.782317  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:37.285771  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:36.886679  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:39.386353  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:40.024993  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:40.038522  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:40.038578  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:40.075237  213635 cri.go:89] found id: ""
	I0414 17:48:40.075264  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.075274  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:40.075282  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:40.075342  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:40.117027  213635 cri.go:89] found id: ""
	I0414 17:48:40.117052  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.117059  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:40.117065  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:40.117130  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:40.150149  213635 cri.go:89] found id: ""
	I0414 17:48:40.150181  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.150193  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:40.150201  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:40.150265  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:40.185087  213635 cri.go:89] found id: ""
	I0414 17:48:40.185114  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.185122  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:40.185128  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:40.185179  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:40.219050  213635 cri.go:89] found id: ""
	I0414 17:48:40.219077  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.219084  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:40.219090  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:40.219137  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:40.252681  213635 cri.go:89] found id: ""
	I0414 17:48:40.252712  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.252723  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:40.252731  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:40.252796  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:40.289524  213635 cri.go:89] found id: ""
	I0414 17:48:40.289551  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.289559  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:40.289564  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:40.289622  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:40.322952  213635 cri.go:89] found id: ""
	I0414 17:48:40.322986  213635 logs.go:282] 0 containers: []
	W0414 17:48:40.322998  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:40.323009  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:40.323023  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:40.375012  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:40.375046  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:40.389868  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:40.389900  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:40.456829  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:40.456849  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:40.456861  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:40.537720  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:40.537759  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:43.079573  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:43.092754  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:43.092808  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:43.128097  213635 cri.go:89] found id: ""
	I0414 17:48:43.128131  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.128142  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:43.128150  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:43.128210  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:43.161361  213635 cri.go:89] found id: ""
	I0414 17:48:43.161391  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.161403  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:43.161410  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:43.161470  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:43.196698  213635 cri.go:89] found id: ""
	I0414 17:48:43.196780  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.196796  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:43.196807  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:43.196870  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:43.230687  213635 cri.go:89] found id: ""
	I0414 17:48:43.230717  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.230724  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:43.230729  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:43.230790  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:43.272118  213635 cri.go:89] found id: ""
	I0414 17:48:43.272143  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.272149  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:43.272155  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:43.272212  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:39.285905  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:41.782863  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:41.387417  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:43.886997  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:44.312670  212456 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.671544959s)
	I0414 17:48:44.312762  212456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:48:44.332203  212456 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:48:44.347886  212456 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:48:44.360967  212456 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:48:44.360988  212456 kubeadm.go:157] found existing configuration files:
	
	I0414 17:48:44.361036  212456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0414 17:48:44.374271  212456 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:48:44.374334  212456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:48:44.391104  212456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0414 17:48:44.407332  212456 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:48:44.407386  212456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:48:44.418237  212456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0414 17:48:44.427328  212456 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:48:44.427373  212456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:48:44.437284  212456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0414 17:48:44.446412  212456 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:48:44.446459  212456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 17:48:44.455796  212456 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:48:44.629587  212456 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:48:43.305507  213635 cri.go:89] found id: ""
	I0414 17:48:43.305544  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.305557  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:43.305567  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:43.305667  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:43.342294  213635 cri.go:89] found id: ""
	I0414 17:48:43.342328  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.342339  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:43.342346  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:43.342403  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:43.374476  213635 cri.go:89] found id: ""
	I0414 17:48:43.374502  213635 logs.go:282] 0 containers: []
	W0414 17:48:43.374510  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:43.374519  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:43.374529  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:43.429817  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:43.429869  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:43.446168  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:43.446205  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:43.562603  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:43.562629  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:43.562647  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:43.647833  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:43.647873  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:46.192567  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:46.205502  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:46.205572  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:46.241592  213635 cri.go:89] found id: ""
	I0414 17:48:46.241618  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.241628  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:46.241635  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:46.241698  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:46.276977  213635 cri.go:89] found id: ""
	I0414 17:48:46.277004  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.277014  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:46.277020  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:46.277079  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:46.312906  213635 cri.go:89] found id: ""
	I0414 17:48:46.312930  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.312939  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:46.312946  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:46.313007  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:46.346994  213635 cri.go:89] found id: ""
	I0414 17:48:46.347018  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.347026  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:46.347031  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:46.347077  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:46.380069  213635 cri.go:89] found id: ""
	I0414 17:48:46.380093  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.380104  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:46.380111  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:46.380172  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:46.416546  213635 cri.go:89] found id: ""
	I0414 17:48:46.416574  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.416584  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:46.416592  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:46.416652  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:46.453343  213635 cri.go:89] found id: ""
	I0414 17:48:46.453374  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.453386  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:46.453393  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:46.453447  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:46.490450  213635 cri.go:89] found id: ""
	I0414 17:48:46.490479  213635 logs.go:282] 0 containers: []
	W0414 17:48:46.490489  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:46.490499  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:46.490513  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:46.551507  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:46.551542  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:46.565243  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:46.565272  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:46.636609  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:46.636634  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:46.636651  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:46.715829  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:46.715872  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:44.284758  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:46.782687  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:46.386592  212269 pod_ready.go:103] pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:46.880932  212269 pod_ready.go:82] duration metric: took 4m0.000148322s for pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace to be "Ready" ...
	E0414 17:48:46.880964  212269 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-q95ck" in "kube-system" namespace to be "Ready" (will not retry!)
	I0414 17:48:46.880988  212269 pod_ready.go:39] duration metric: took 4m15.038784615s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:48:46.881025  212269 kubeadm.go:597] duration metric: took 4m58.434849831s to restartPrimaryControlPlane
	W0414 17:48:46.881139  212269 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0414 17:48:46.881174  212269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 17:48:52.039840  212456 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 17:48:52.039919  212456 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:48:52.040033  212456 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:48:52.040172  212456 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:48:52.040311  212456 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 17:48:52.040403  212456 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:48:52.041680  212456 out.go:235]   - Generating certificates and keys ...
	I0414 17:48:52.041782  212456 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:48:52.041901  212456 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:48:52.042004  212456 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 17:48:52.042135  212456 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 17:48:52.042241  212456 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 17:48:52.042329  212456 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 17:48:52.042439  212456 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 17:48:52.042525  212456 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 17:48:52.042625  212456 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 17:48:52.042746  212456 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 17:48:52.042810  212456 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 17:48:52.042895  212456 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:48:52.042961  212456 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:48:52.043020  212456 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 17:48:52.043068  212456 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:48:52.043153  212456 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:48:52.043209  212456 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:48:52.043309  212456 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:48:52.043396  212456 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:48:52.044723  212456 out.go:235]   - Booting up control plane ...
	I0414 17:48:52.044821  212456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:48:52.044934  212456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:48:52.045009  212456 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:48:52.045114  212456 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:48:52.045213  212456 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:48:52.045252  212456 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:48:52.045398  212456 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 17:48:52.045503  212456 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 17:48:52.045581  212456 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.205474ms
	I0414 17:48:52.045662  212456 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 17:48:52.045714  212456 kubeadm.go:310] [api-check] The API server is healthy after 4.502044755s
	I0414 17:48:52.045804  212456 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 17:48:52.045996  212456 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 17:48:52.046104  212456 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 17:48:52.046335  212456 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-061428 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 17:48:52.046423  212456 kubeadm.go:310] [bootstrap-token] Using token: 0x0swo.cnocxvbqul1ca541
	I0414 17:48:52.047605  212456 out.go:235]   - Configuring RBAC rules ...
	I0414 17:48:52.047713  212456 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 17:48:52.047795  212456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 17:48:52.047959  212456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 17:48:52.048082  212456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 17:48:52.048237  212456 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 17:48:52.048315  212456 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 17:48:52.048413  212456 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 17:48:52.048451  212456 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 17:48:52.048491  212456 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 17:48:52.048496  212456 kubeadm.go:310] 
	I0414 17:48:52.048549  212456 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 17:48:52.048555  212456 kubeadm.go:310] 
	I0414 17:48:52.048618  212456 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 17:48:52.048629  212456 kubeadm.go:310] 
	I0414 17:48:52.048653  212456 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 17:48:52.048710  212456 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 17:48:52.048756  212456 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 17:48:52.048762  212456 kubeadm.go:310] 
	I0414 17:48:52.048819  212456 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 17:48:52.048829  212456 kubeadm.go:310] 
	I0414 17:48:52.048872  212456 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 17:48:52.048878  212456 kubeadm.go:310] 
	I0414 17:48:52.048920  212456 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 17:48:52.048983  212456 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 17:48:52.049046  212456 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 17:48:52.049053  212456 kubeadm.go:310] 
	I0414 17:48:52.049156  212456 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 17:48:52.049245  212456 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 17:48:52.049251  212456 kubeadm.go:310] 
	I0414 17:48:52.049325  212456 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 0x0swo.cnocxvbqul1ca541 \
	I0414 17:48:52.049412  212456 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d \
	I0414 17:48:52.049431  212456 kubeadm.go:310] 	--control-plane 
	I0414 17:48:52.049437  212456 kubeadm.go:310] 
	I0414 17:48:52.049511  212456 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 17:48:52.049517  212456 kubeadm.go:310] 
	I0414 17:48:52.049584  212456 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 0x0swo.cnocxvbqul1ca541 \
	I0414 17:48:52.049724  212456 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d 
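
The block above is kubeadm's standard post-init checklist: install the admin kubeconfig for a regular user, then join further nodes using the printed bootstrap token and CA cert hash. Condensed into a runnable sequence using only commands taken from the log, plus a hypothetical `kubectl get nodes` sanity check:

    # On the control plane, as a regular user:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    kubectl get nodes   # sanity check (assumes kubectl is on PATH)

    # On a worker node, as root (token and hash copied from the output above):
    kubeadm join control-plane.minikube.internal:8444 \
      --token 0x0swo.cnocxvbqul1ca541 \
      --discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d
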
	I0414 17:48:52.049740  212456 cni.go:84] Creating CNI manager for ""
	I0414 17:48:52.049793  212456 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:48:52.051076  212456 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 17:48:52.052229  212456 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 17:48:52.062677  212456 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
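
Here minikube copies a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist; the file's actual contents never appear in the log. As an illustration only, a generic bridge + host-local conflist of the kind the bridge plugin expects could be written like this — every field value below is an assumption, not minikube's real file:

    # Illustrative sketch only; values assumed, not taken from minikube.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }
    EOF
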
	I0414 17:48:52.080923  212456 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 17:48:52.081020  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:52.081077  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-061428 minikube.k8s.io/updated_at=2025_04_14T17_48_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f1e69a1cd498979c80dbe968253c827f6eb2cf37 minikube.k8s.io/name=default-k8s-diff-port-061428 minikube.k8s.io/primary=true
	I0414 17:48:52.125288  212456 ops.go:34] apiserver oom_adj: -16
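
The ops check above reads the API server's legacy OOM adjustment; -16 biases the kernel's OOM killer away from killing kube-apiserver under memory pressure. The same check by hand (oom_score_adj is the modern interface; the legacy oom_adj shown in the log is derived from it):

    # Legacy OOM adjustment, exactly as minikube reads it:
    cat /proc/$(pgrep kube-apiserver)/oom_adj

    # Modern equivalent field:
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj
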
	I0414 17:48:52.342710  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:52.842859  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:49.255006  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:49.277839  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:49.277915  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:49.340015  213635 cri.go:89] found id: ""
	I0414 17:48:49.340051  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.340063  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:49.340071  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:49.340143  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:49.375879  213635 cri.go:89] found id: ""
	I0414 17:48:49.375907  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.375917  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:49.375924  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:49.375987  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:49.408770  213635 cri.go:89] found id: ""
	I0414 17:48:49.408796  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.408806  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:49.408813  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:49.408877  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:49.446644  213635 cri.go:89] found id: ""
	I0414 17:48:49.446673  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.446682  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:49.446690  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:49.446758  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:49.486858  213635 cri.go:89] found id: ""
	I0414 17:48:49.486887  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.486897  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:49.486904  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:49.486964  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:49.525400  213635 cri.go:89] found id: ""
	I0414 17:48:49.525427  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.525437  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:49.525445  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:49.525507  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:49.559553  213635 cri.go:89] found id: ""
	I0414 17:48:49.559578  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.559587  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:49.559595  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:49.559656  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:49.591090  213635 cri.go:89] found id: ""
	I0414 17:48:49.591123  213635 logs.go:282] 0 containers: []
	W0414 17:48:49.591131  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:49.591144  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:49.591155  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:49.643807  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:49.643841  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:49.657066  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:49.657090  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:49.729359  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
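
The describe-nodes failure here is a symptom rather than a cause: kubectl on this node targets localhost:8443, and nothing listens there because no kube-apiserver container exists (every crictl lookup above came back empty). A quick sketch for confirming which side is broken, assuming the 8443 port from the error text:

    # Anything listening on the API server port?
    sudo ss -tlnp | grep 8443 || echo "no listener on 8443"

    # Is the apiserver container present at all?
    sudo crictl ps -a --name=kube-apiserver
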
	I0414 17:48:49.729388  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:49.729404  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:49.808543  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:49.808573  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
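
Process 213635 (an older v1.20.0 profile whose control plane never came up) repeats this diagnostics sweep on every retry: a crictl lookup per component, then kubelet and CRI-O journals, filtered dmesg, a describe-nodes attempt, and a container status listing. The sweep by hand, using only commands shown verbatim in the log — the `which crictl || echo crictl` fallback just retries the bare name if crictl is not on root's PATH:

    # Per-component lookup (empty output means the container does not exist).
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd

    # Recent service logs and kernel messages of warning level or higher.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

    # Overall container status, falling back to docker if crictl is missing.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
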
	I0414 17:48:52.348426  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:52.366010  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:52.366076  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:52.404950  213635 cri.go:89] found id: ""
	I0414 17:48:52.404976  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.404985  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:52.404991  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:52.405046  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:52.445893  213635 cri.go:89] found id: ""
	I0414 17:48:52.445927  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.445937  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:52.445945  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:52.446011  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:52.479635  213635 cri.go:89] found id: ""
	I0414 17:48:52.479657  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.479664  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:52.479671  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:52.479738  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:52.523616  213635 cri.go:89] found id: ""
	I0414 17:48:52.523650  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.523661  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:52.523669  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:52.523730  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:52.571706  213635 cri.go:89] found id: ""
	I0414 17:48:52.571739  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.571751  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:52.571758  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:52.571826  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:52.616799  213635 cri.go:89] found id: ""
	I0414 17:48:52.616822  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.616831  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:52.616836  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:52.616901  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:52.652373  213635 cri.go:89] found id: ""
	I0414 17:48:52.652402  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.652413  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:52.652420  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:52.652481  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:52.689582  213635 cri.go:89] found id: ""
	I0414 17:48:52.689614  213635 logs.go:282] 0 containers: []
	W0414 17:48:52.689626  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:52.689637  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:52.689651  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:52.741215  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:52.741254  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:52.757324  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:52.757361  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:52.828589  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:52.828609  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:52.828621  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:52.918483  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:52.918527  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:49.290709  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:51.781114  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:53.343155  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:53.842838  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:54.343070  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:54.843789  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:55.342935  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:55.843502  212456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:48:55.939704  212456 kubeadm.go:1113] duration metric: took 3.858757705s to wait for elevateKubeSystemPrivileges
	I0414 17:48:55.939738  212456 kubeadm.go:394] duration metric: took 5m0.143792732s to StartCluster
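
The burst of `kubectl get sa default` calls above is a poll: minikube retries roughly every 500ms until the fresh cluster has created the `default` service account, which is what the 3.86s elevateKubeSystemPrivileges metric measures. A hand-rolled version of the same wait, reusing the in-VM kubectl binary and kubeconfig path from the log (timeout handling omitted for brevity):

    # Poll until the default service account exists.
    until sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
    echo "default service account ready"
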
	I0414 17:48:55.939772  212456 settings.go:142] acquiring lock: {Name:mk0f1596f566b3225bf96154f374fff0641b21e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:48:55.939872  212456 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:48:55.941014  212456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:48:55.941300  212456 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 17:48:55.941438  212456 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 17:48:55.941538  212456 config.go:182] Loaded profile config "default-k8s-diff-port-061428": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:48:55.941554  212456 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-061428"
	I0414 17:48:55.941576  212456 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-061428"
	I0414 17:48:55.941591  212456 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-061428"
	I0414 17:48:55.941600  212456 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-061428"
	I0414 17:48:55.941602  212456 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-061428"
	I0414 17:48:55.941601  212456 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-061428"
	W0414 17:48:55.941614  212456 addons.go:247] addon dashboard should already be in state true
	I0414 17:48:55.941622  212456 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-061428"
	W0414 17:48:55.941645  212456 addons.go:247] addon metrics-server should already be in state true
	I0414 17:48:55.941654  212456 host.go:66] Checking if "default-k8s-diff-port-061428" exists ...
	I0414 17:48:55.941580  212456 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-061428"
	I0414 17:48:55.941676  212456 host.go:66] Checking if "default-k8s-diff-port-061428" exists ...
	W0414 17:48:55.941703  212456 addons.go:247] addon storage-provisioner should already be in state true
	I0414 17:48:55.941749  212456 host.go:66] Checking if "default-k8s-diff-port-061428" exists ...
	I0414 17:48:55.942083  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.942123  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.942152  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.942089  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.942265  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.942088  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.942329  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.942159  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.943212  212456 out.go:177] * Verifying Kubernetes components...
	I0414 17:48:55.944529  212456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:48:55.961205  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42543
	I0414 17:48:55.961205  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I0414 17:48:55.961207  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46211
	I0414 17:48:55.961746  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.961764  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.961872  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.962378  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.962406  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.962382  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.962446  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.962515  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.962533  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.962928  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.963036  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.963098  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetState
	I0414 17:48:55.963185  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.963383  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40315
	I0414 17:48:55.963645  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.963676  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.963884  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.963930  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.964392  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.964780  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.964796  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.965235  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.965735  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.965770  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.966920  212456 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-061428"
	W0414 17:48:55.966941  212456 addons.go:247] addon default-storageclass should already be in state true
	I0414 17:48:55.966965  212456 host.go:66] Checking if "default-k8s-diff-port-061428" exists ...
	I0414 17:48:55.967303  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:55.967339  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:55.981120  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34037
	I0414 17:48:55.981603  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.982500  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.982521  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.982919  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.983222  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetState
	I0414 17:48:55.983374  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44475
	I0414 17:48:55.983676  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.987256  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .DriverName
	I0414 17:48:55.987275  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46701
	I0414 17:48:55.987392  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.987404  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.987825  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:55.988138  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.988179  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:55.988192  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:55.988507  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:55.988780  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetState
	I0414 17:48:55.988791  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetState
	I0414 17:48:55.989758  212456 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0414 17:48:55.991265  212456 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0414 17:48:55.991271  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .DriverName
	I0414 17:48:55.991283  212456 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0414 17:48:55.991300  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHHostname
	I0414 17:48:55.992806  212456 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0414 17:48:55.993944  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .DriverName
	I0414 17:48:55.995202  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:55.995700  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:77:2e", ip: ""} in network mk-default-k8s-diff-port-061428: {Iface:virbr3 ExpiryTime:2025-04-14 18:43:42 +0000 UTC Type:0 Mac:52:54:00:b1:77:2e Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-061428 Clientid:01:52:54:00:b1:77:2e}
	I0414 17:48:55.995715  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined IP address 192.168.61.196 and MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:55.995878  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHPort
	I0414 17:48:55.995970  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHKeyPath
	I0414 17:48:55.996048  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHUsername
	I0414 17:48:55.996310  212456 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/default-k8s-diff-port-061428/id_rsa Username:docker}
	I0414 17:48:55.998615  212456 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0414 17:48:55.998632  212456 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:48:55.999859  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0414 17:48:55.999877  212456 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0414 17:48:55.999893  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHHostname
	I0414 17:48:56.000008  212456 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:48:56.000031  212456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 17:48:56.000048  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHHostname
	I0414 17:48:56.003728  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:56.004208  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:77:2e", ip: ""} in network mk-default-k8s-diff-port-061428: {Iface:virbr3 ExpiryTime:2025-04-14 18:43:42 +0000 UTC Type:0 Mac:52:54:00:b1:77:2e Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-061428 Clientid:01:52:54:00:b1:77:2e}
	I0414 17:48:56.004226  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined IP address 192.168.61.196 and MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:56.004232  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:56.004445  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHPort
	I0414 17:48:56.004661  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHKeyPath
	I0414 17:48:56.004738  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:77:2e", ip: ""} in network mk-default-k8s-diff-port-061428: {Iface:virbr3 ExpiryTime:2025-04-14 18:43:42 +0000 UTC Type:0 Mac:52:54:00:b1:77:2e Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-061428 Clientid:01:52:54:00:b1:77:2e}
	I0414 17:48:56.004762  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined IP address 192.168.61.196 and MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:56.004788  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHUsername
	I0414 17:48:56.004926  212456 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/default-k8s-diff-port-061428/id_rsa Username:docker}
	I0414 17:48:56.005143  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHPort
	I0414 17:48:56.005294  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHKeyPath
	I0414 17:48:56.005400  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHUsername
	I0414 17:48:56.005546  212456 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/default-k8s-diff-port-061428/id_rsa Username:docker}
	I0414 17:48:56.015091  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41055
	I0414 17:48:56.015439  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:56.015805  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:56.015814  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:56.016147  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:56.016520  212456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:48:56.016543  212456 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:48:56.032058  212456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44219
	I0414 17:48:56.032451  212456 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:48:56.032966  212456 main.go:141] libmachine: Using API Version  1
	I0414 17:48:56.032988  212456 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:48:56.033343  212456 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:48:56.033531  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetState
	I0414 17:48:56.035026  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .DriverName
	I0414 17:48:56.035244  212456 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 17:48:56.035267  212456 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 17:48:56.035289  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHHostname
	I0414 17:48:56.037961  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:56.039361  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:77:2e", ip: ""} in network mk-default-k8s-diff-port-061428: {Iface:virbr3 ExpiryTime:2025-04-14 18:43:42 +0000 UTC Type:0 Mac:52:54:00:b1:77:2e Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-061428 Clientid:01:52:54:00:b1:77:2e}
	I0414 17:48:56.039393  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | domain default-k8s-diff-port-061428 has defined IP address 192.168.61.196 and MAC address 52:54:00:b1:77:2e in network mk-default-k8s-diff-port-061428
	I0414 17:48:56.042043  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHPort
	I0414 17:48:56.042282  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHKeyPath
	I0414 17:48:56.044137  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .GetSSHUsername
	I0414 17:48:56.044613  212456 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/default-k8s-diff-port-061428/id_rsa Username:docker}
	I0414 17:48:56.170857  212456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:48:56.201264  212456 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-061428" to be "Ready" ...
	I0414 17:48:56.215666  212456 node_ready.go:49] node "default-k8s-diff-port-061428" has status "Ready":"True"
	I0414 17:48:56.215687  212456 node_ready.go:38] duration metric: took 14.390119ms for node "default-k8s-diff-port-061428" to be "Ready" ...
	I0414 17:48:56.215698  212456 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:48:56.219556  212456 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
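
Once the node reports Ready, minikube waits on each system-critical pod in turn, starting with the etcd static pod. The equivalent with stock kubectl, assuming the profile's kubeconfig context is active and reusing minikube's 6m budget:

    # Wait for the etcd static pod in kube-system to become Ready.
    kubectl -n kube-system wait --for=condition=Ready \
      pod/etcd-default-k8s-diff-port-061428 --timeout=6m0s
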
	I0414 17:48:56.325515  212456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:48:56.328344  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0414 17:48:56.328369  212456 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0414 17:48:56.366616  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0414 17:48:56.366644  212456 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0414 17:48:56.366924  212456 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0414 17:48:56.366947  212456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0414 17:48:56.400343  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0414 17:48:56.400365  212456 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0414 17:48:56.403134  212456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 17:48:56.450599  212456 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0414 17:48:56.450631  212456 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0414 17:48:56.474003  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0414 17:48:56.474030  212456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0414 17:48:56.564681  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0414 17:48:56.564716  212456 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0414 17:48:56.565092  212456 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 17:48:56.565114  212456 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0414 17:48:56.634647  212456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 17:48:56.667139  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0414 17:48:56.667170  212456 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0414 17:48:56.800483  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0414 17:48:56.800513  212456 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0414 17:48:56.844350  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0414 17:48:56.844380  212456 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0414 17:48:56.924656  212456 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0414 17:48:56.924693  212456 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0414 17:48:57.009703  212456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0414 17:48:57.322557  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.322593  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.322574  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.322695  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.322923  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.322939  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.322953  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.322961  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.322979  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.322998  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.323007  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.323016  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.324913  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | Closing plugin on server side
	I0414 17:48:57.324970  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | Closing plugin on server side
	I0414 17:48:57.324986  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.324997  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.325005  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.325019  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.345450  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.345469  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.345740  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.345761  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.943361  212456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.308667432s)
	I0414 17:48:57.943408  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.943422  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.943797  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.943831  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.943842  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:57.943851  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:57.943880  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | Closing plugin on server side
	I0414 17:48:57.944243  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | Closing plugin on server side
	I0414 17:48:57.944262  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:57.944275  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:57.944294  212456 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-061428"
	I0414 17:48:55.461925  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:55.475396  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:55.475472  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:55.511338  213635 cri.go:89] found id: ""
	I0414 17:48:55.511366  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.511374  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:55.511381  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:55.511444  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:55.547324  213635 cri.go:89] found id: ""
	I0414 17:48:55.547348  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.547355  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:55.547366  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:55.547423  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:55.593274  213635 cri.go:89] found id: ""
	I0414 17:48:55.593303  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.593314  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:55.593322  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:55.593386  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:55.628013  213635 cri.go:89] found id: ""
	I0414 17:48:55.628042  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.628053  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:55.628060  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:55.628127  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:55.663752  213635 cri.go:89] found id: ""
	I0414 17:48:55.663786  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.663798  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:55.663805  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:55.663867  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:55.700578  213635 cri.go:89] found id: ""
	I0414 17:48:55.700601  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.700609  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:55.700614  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:55.700661  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:55.733772  213635 cri.go:89] found id: ""
	I0414 17:48:55.733797  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.733805  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:55.733811  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:55.733891  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:55.769135  213635 cri.go:89] found id: ""
	I0414 17:48:55.769161  213635 logs.go:282] 0 containers: []
	W0414 17:48:55.769174  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:55.769184  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:55.769196  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:48:55.810526  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:55.810560  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:55.863132  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:55.863166  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:55.879346  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:55.879381  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:55.961385  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:55.961403  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:55.961418  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:53.781674  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:55.784266  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:58.283947  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:58.225462  212456 pod_ready.go:103] pod "etcd-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:59.380615  212456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.370840717s)
	I0414 17:48:59.380686  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:59.380701  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:59.381003  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:59.381024  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:59.381039  212456 main.go:141] libmachine: Making call to close driver server
	I0414 17:48:59.381047  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) Calling .Close
	I0414 17:48:59.381256  212456 main.go:141] libmachine: (default-k8s-diff-port-061428) DBG | Closing plugin on server side
	I0414 17:48:59.381286  212456 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:48:59.381299  212456 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:48:59.382695  212456 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-061428 addons enable metrics-server
	
	I0414 17:48:59.383922  212456 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0414 17:48:59.385040  212456 addons.go:514] duration metric: took 3.443627022s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
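
With the four addons applied in about 3.4s, a short verification pass could look like this — assuming the default-k8s-diff-port-061428 context is current; `kubectl top` returns data only once metrics-server is actually serving, and the dashboard namespace name is an assumption based on the standard manifest:

    kubectl -n kube-system get deploy metrics-server
    kubectl -n kubernetes-dashboard get pods
    kubectl top nodes   # populated once metrics-server is Ready
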
	I0414 17:49:00.227357  212456 pod_ready.go:103] pod "etcd-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:02.723936  212456 pod_ready.go:103] pod "etcd-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"False"
	I0414 17:48:58.566639  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:48:58.580841  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:48:58.580906  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:48:58.620613  213635 cri.go:89] found id: ""
	I0414 17:48:58.620647  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.620659  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:48:58.620668  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:48:58.620736  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:48:58.661513  213635 cri.go:89] found id: ""
	I0414 17:48:58.661549  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.661559  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:48:58.661567  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:48:58.661637  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:48:58.710480  213635 cri.go:89] found id: ""
	I0414 17:48:58.710512  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.710524  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:48:58.710531  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:48:58.710594  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:48:58.755300  213635 cri.go:89] found id: ""
	I0414 17:48:58.755328  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.755339  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:48:58.755346  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:48:58.755403  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:48:58.791364  213635 cri.go:89] found id: ""
	I0414 17:48:58.791396  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.791416  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:48:58.791424  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:48:58.791490  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:48:58.830571  213635 cri.go:89] found id: ""
	I0414 17:48:58.830598  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.830610  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:48:58.830617  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:48:58.830677  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:48:58.864897  213635 cri.go:89] found id: ""
	I0414 17:48:58.864924  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.864933  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:48:58.864940  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:48:58.865000  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:48:58.900362  213635 cri.go:89] found id: ""
	I0414 17:48:58.900393  213635 logs.go:282] 0 containers: []
	W0414 17:48:58.900403  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:48:58.900414  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:48:58.900431  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:48:58.953300  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:48:58.953340  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:48:58.974592  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:48:58.974634  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:48:59.054206  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:48:59.054234  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:48:59.054251  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:48:59.137354  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:48:59.137390  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:01.684252  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:01.702697  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:01.702776  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:01.746204  213635 cri.go:89] found id: ""
	I0414 17:49:01.746232  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.746276  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:01.746284  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:01.746347  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:01.784544  213635 cri.go:89] found id: ""
	I0414 17:49:01.784574  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.784584  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:01.784591  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:01.784649  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:01.821353  213635 cri.go:89] found id: ""
	I0414 17:49:01.821382  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.821392  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:01.821399  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:01.821454  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:01.855681  213635 cri.go:89] found id: ""
	I0414 17:49:01.855707  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.855715  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:01.855723  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:01.855783  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:01.891114  213635 cri.go:89] found id: ""
	I0414 17:49:01.891142  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.891153  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:01.891161  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:01.891230  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:01.926536  213635 cri.go:89] found id: ""
	I0414 17:49:01.926570  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.926581  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:01.926588  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:01.926648  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:01.971430  213635 cri.go:89] found id: ""
	I0414 17:49:01.971455  213635 logs.go:282] 0 containers: []
	W0414 17:49:01.971462  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:01.971468  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:01.971513  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:02.010416  213635 cri.go:89] found id: ""
	I0414 17:49:02.010444  213635 logs.go:282] 0 containers: []
	W0414 17:49:02.010452  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:02.010461  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:02.010476  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:02.093422  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:02.093451  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:02.093468  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:02.175219  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:02.175256  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:02.216929  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:02.216957  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:02.269151  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:02.269188  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
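
The "container status" gather above relies on a shell fallback chain rather than assuming a particular runtime. If `which crictl` prints nothing, the backtick substitution degrades to the literal word crictl; when that command in turn fails, the || falls through to Docker:

	# Same idiom as the logged command: prefer crictl, fall back to docker.
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
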
	I0414 17:49:00.784029  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:03.284820  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:03.725360  212456 pod_ready.go:93] pod "etcd-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:03.725386  212456 pod_ready.go:82] duration metric: took 7.505806576s for pod "etcd-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:03.725396  212456 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:03.729623  212456 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:03.729653  212456 pod_ready.go:82] duration metric: took 4.248954ms for pod "kube-apiserver-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:03.729668  212456 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:03.733261  212456 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:03.733283  212456 pod_ready.go:82] duration metric: took 3.605315ms for pod "kube-controller-manager-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:03.733294  212456 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:04.239874  212456 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-061428" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:04.239896  212456 pod_ready.go:82] duration metric: took 506.59428ms for pod "kube-scheduler-default-k8s-diff-port-061428" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:04.239904  212456 pod_ready.go:39] duration metric: took 8.024194625s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
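
The pod_ready loop above is minikube's internal poll; roughly the same wait can be expressed with kubectl alone (a sketch, shown here for the CoreDNS label only):

	kubectl wait --for=condition=Ready pod -l k8s-app=kube-dns \
	  -n kube-system --timeout=6m0s
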
	I0414 17:49:04.239919  212456 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:49:04.239968  212456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:04.262907  212456 api_server.go:72] duration metric: took 8.321571945s to wait for apiserver process to appear ...
	I0414 17:49:04.262930  212456 api_server.go:88] waiting for apiserver healthz status ...
	I0414 17:49:04.262950  212456 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I0414 17:49:04.267486  212456 api_server.go:279] https://192.168.61.196:8444/healthz returned 200:
	ok
	I0414 17:49:04.268404  212456 api_server.go:141] control plane version: v1.32.2
	I0414 17:49:04.268420  212456 api_server.go:131] duration metric: took 5.484737ms to wait for apiserver health ...
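
The healthz probe is a plain HTTPS GET expected to return 200 with the body "ok". Reproducing it by hand (a sketch; -k skips certificate verification, or point --cacert at the certificateDir that appears later in this log):

	curl -sk https://192.168.61.196:8444/healthz            # expect: ok
	curl -s --cacert /var/lib/minikube/certs/ca.crt \
	  https://192.168.61.196:8444/healthz
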
	I0414 17:49:04.268432  212456 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 17:49:04.271870  212456 system_pods.go:59] 9 kube-system pods found
	I0414 17:49:04.271899  212456 system_pods.go:61] "coredns-668d6bf9bc-mdntl" [009622fa-7c7c-4903-945f-d2bbf5262a9b] Running
	I0414 17:49:04.271908  212456 system_pods.go:61] "coredns-668d6bf9bc-qhjnc" [97f585f4-e039-4c34-b132-9a56318e7ed0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 17:49:04.271918  212456 system_pods.go:61] "etcd-default-k8s-diff-port-061428" [3f7f2d5f-ae4c-4946-952c-9aae0156cf95] Running
	I0414 17:49:04.271924  212456 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-061428" [accdcd02-d8e2-447c-83f2-a6cd0b935b7b] Running
	I0414 17:49:04.271928  212456 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-061428" [08894510-d41c-4e93-b1a9-43888732429b] Running
	I0414 17:49:04.271931  212456 system_pods.go:61] "kube-proxy-2ft7c" [7d0e0148-267c-4421-846e-7d2f8f2f3a14] Running
	I0414 17:49:04.271935  212456 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-061428" [9d32a872-0f66-4f25-81f1-9707372dbc6f] Running
	I0414 17:49:04.271939  212456 system_pods.go:61] "metrics-server-f79f97bbb-g2k8m" [b02b8a70-ae5c-4677-83b5-b817fc733882] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:49:04.271945  212456 system_pods.go:61] "storage-provisioner" [4d1ccb5e-58d4-43ea-aca2-885ad7af9484] Running
	I0414 17:49:04.271951  212456 system_pods.go:74] duration metric: took 3.508628ms to wait for pod list to return data ...
	I0414 17:49:04.271959  212456 default_sa.go:34] waiting for default service account to be created ...
	I0414 17:49:04.274062  212456 default_sa.go:45] found service account: "default"
	I0414 17:49:04.274080  212456 default_sa.go:55] duration metric: took 2.11536ms for default service account to be created ...
	I0414 17:49:04.274086  212456 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 17:49:04.324903  212456 system_pods.go:86] 9 kube-system pods found
	I0414 17:49:04.324934  212456 system_pods.go:89] "coredns-668d6bf9bc-mdntl" [009622fa-7c7c-4903-945f-d2bbf5262a9b] Running
	I0414 17:49:04.324947  212456 system_pods.go:89] "coredns-668d6bf9bc-qhjnc" [97f585f4-e039-4c34-b132-9a56318e7ed0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 17:49:04.324954  212456 system_pods.go:89] "etcd-default-k8s-diff-port-061428" [3f7f2d5f-ae4c-4946-952c-9aae0156cf95] Running
	I0414 17:49:04.324963  212456 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-061428" [accdcd02-d8e2-447c-83f2-a6cd0b935b7b] Running
	I0414 17:49:04.324968  212456 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-061428" [08894510-d41c-4e93-b1a9-43888732429b] Running
	I0414 17:49:04.324974  212456 system_pods.go:89] "kube-proxy-2ft7c" [7d0e0148-267c-4421-846e-7d2f8f2f3a14] Running
	I0414 17:49:04.324979  212456 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-061428" [9d32a872-0f66-4f25-81f1-9707372dbc6f] Running
	I0414 17:49:04.324987  212456 system_pods.go:89] "metrics-server-f79f97bbb-g2k8m" [b02b8a70-ae5c-4677-83b5-b817fc733882] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:49:04.324993  212456 system_pods.go:89] "storage-provisioner" [4d1ccb5e-58d4-43ea-aca2-885ad7af9484] Running
	I0414 17:49:04.325002  212456 system_pods.go:126] duration metric: took 50.910972ms to wait for k8s-apps to be running ...
	I0414 17:49:04.325021  212456 system_svc.go:44] waiting for kubelet service to be running ...
	I0414 17:49:04.325080  212456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:49:04.339750  212456 system_svc.go:56] duration metric: took 14.732403ms (WaitForService) to wait for kubelet
	I0414 17:49:04.339775  212456 kubeadm.go:582] duration metric: took 8.398444377s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 17:49:04.339798  212456 node_conditions.go:102] verifying NodePressure condition ...
	I0414 17:49:04.524559  212456 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 17:49:04.524654  212456 node_conditions.go:123] node cpu capacity is 2
	I0414 17:49:04.524675  212456 node_conditions.go:105] duration metric: took 184.870799ms to run NodePressure ...
	I0414 17:49:04.524690  212456 start.go:241] waiting for startup goroutines ...
	I0414 17:49:04.524701  212456 start.go:246] waiting for cluster config update ...
	I0414 17:49:04.524721  212456 start.go:255] writing updated cluster config ...
	I0414 17:49:04.525044  212456 ssh_runner.go:195] Run: rm -f paused
	I0414 17:49:04.582311  212456 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 17:49:04.584154  212456 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-061428" cluster and "default" namespace by default
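
Once "Done!" is printed, the kubeconfig context has been switched, which can be verified directly (illustrative commands, not part of the test):

	kubectl config current-context    # expect: default-k8s-diff-port-061428
	kubectl get nodes -o wide
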
	I0414 17:49:04.787535  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:04.801528  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:04.801604  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:04.838408  213635 cri.go:89] found id: ""
	I0414 17:49:04.838442  213635 logs.go:282] 0 containers: []
	W0414 17:49:04.838458  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:04.838466  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:04.838529  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:04.888614  213635 cri.go:89] found id: ""
	I0414 17:49:04.888645  213635 logs.go:282] 0 containers: []
	W0414 17:49:04.888658  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:04.888667  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:04.888720  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:04.931279  213635 cri.go:89] found id: ""
	I0414 17:49:04.931307  213635 logs.go:282] 0 containers: []
	W0414 17:49:04.931317  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:04.931325  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:04.931461  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:04.970024  213635 cri.go:89] found id: ""
	I0414 17:49:04.970052  213635 logs.go:282] 0 containers: []
	W0414 17:49:04.970061  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:04.970069  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:04.970138  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:05.012914  213635 cri.go:89] found id: ""
	I0414 17:49:05.012938  213635 logs.go:282] 0 containers: []
	W0414 17:49:05.012958  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:05.012967  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:05.013027  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:05.050788  213635 cri.go:89] found id: ""
	I0414 17:49:05.050811  213635 logs.go:282] 0 containers: []
	W0414 17:49:05.050834  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:05.050842  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:05.050905  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:05.090988  213635 cri.go:89] found id: ""
	I0414 17:49:05.091017  213635 logs.go:282] 0 containers: []
	W0414 17:49:05.091028  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:05.091036  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:05.091101  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:05.127104  213635 cri.go:89] found id: ""
	I0414 17:49:05.127138  213635 logs.go:282] 0 containers: []
	W0414 17:49:05.127149  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:05.127160  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:05.127176  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:05.143792  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:05.143828  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:05.218655  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:05.218680  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:05.218697  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:05.306169  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:05.306201  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:05.347150  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:05.347190  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:07.907355  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:07.920775  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:07.920854  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:07.958486  213635 cri.go:89] found id: ""
	I0414 17:49:07.958517  213635 logs.go:282] 0 containers: []
	W0414 17:49:07.958527  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:07.958534  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:07.958600  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:07.995351  213635 cri.go:89] found id: ""
	I0414 17:49:07.995383  213635 logs.go:282] 0 containers: []
	W0414 17:49:07.995394  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:07.995401  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:07.995464  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:08.031830  213635 cri.go:89] found id: ""
	I0414 17:49:08.031864  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.031876  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:08.031885  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:08.031953  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:08.072277  213635 cri.go:89] found id: ""
	I0414 17:49:08.072308  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.072321  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:08.072328  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:08.072400  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:08.107778  213635 cri.go:89] found id: ""
	I0414 17:49:08.107811  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.107823  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:08.107832  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:08.107889  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:08.144220  213635 cri.go:89] found id: ""
	I0414 17:49:08.144254  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.144267  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:08.144276  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:08.144350  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:08.199205  213635 cri.go:89] found id: ""
	I0414 17:49:08.199238  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.199251  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:08.199260  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:08.199329  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:08.236929  213635 cri.go:89] found id: ""
	I0414 17:49:08.236966  213635 logs.go:282] 0 containers: []
	W0414 17:49:08.236978  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:08.236989  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:08.237006  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:05.781883  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:07.782747  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:08.288285  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:08.288309  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:08.301531  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:08.301562  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:08.370610  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:08.370643  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:08.370663  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:08.449517  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:08.449559  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:10.989149  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:11.004705  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:11.004776  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:11.044842  213635 cri.go:89] found id: ""
	I0414 17:49:11.044872  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.044882  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:11.044889  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:11.044944  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:11.079268  213635 cri.go:89] found id: ""
	I0414 17:49:11.079296  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.079306  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:11.079313  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:11.079373  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:11.111894  213635 cri.go:89] found id: ""
	I0414 17:49:11.111921  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.111931  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:11.111937  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:11.111993  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:11.147005  213635 cri.go:89] found id: ""
	I0414 17:49:11.147029  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.147039  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:11.147046  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:11.147115  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:11.181246  213635 cri.go:89] found id: ""
	I0414 17:49:11.181274  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.181281  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:11.181286  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:11.181333  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:11.222368  213635 cri.go:89] found id: ""
	I0414 17:49:11.222396  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.222404  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:11.222409  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:11.222455  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:11.262336  213635 cri.go:89] found id: ""
	I0414 17:49:11.262360  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.262367  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:11.262373  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:11.262430  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:11.305115  213635 cri.go:89] found id: ""
	I0414 17:49:11.305146  213635 logs.go:282] 0 containers: []
	W0414 17:49:11.305157  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:11.305168  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:11.305180  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:11.340697  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:11.340726  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:11.390526  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:11.390566  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:11.403671  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:11.403699  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:11.478187  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:11.478210  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:11.478225  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:10.282583  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:12.781281  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:14.950237  212269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (28.069030835s)
	I0414 17:49:14.950306  212269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:49:14.971834  212269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:49:14.987342  212269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:49:15.000668  212269 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:49:15.000687  212269 kubeadm.go:157] found existing configuration files:
	
	I0414 17:49:15.000752  212269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:49:15.020443  212269 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:49:15.020492  212269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:49:15.037229  212269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:49:15.049591  212269 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:49:15.049642  212269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:49:15.059769  212269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:49:15.077786  212269 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:49:15.077853  212269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:49:15.089728  212269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:49:15.100674  212269 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:49:15.100715  212269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
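
The four grep/rm pairs above implement one rule: delete any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint. Condensed into a single loop (a sketch of the same logic, not minikube's code):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done
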
	I0414 17:49:15.111637  212269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:49:15.291703  212269 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:49:14.068187  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:14.082429  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:14.082502  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:14.118294  213635 cri.go:89] found id: ""
	I0414 17:49:14.118322  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.118333  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:14.118339  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:14.118399  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:14.150631  213635 cri.go:89] found id: ""
	I0414 17:49:14.150661  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.150673  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:14.150680  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:14.150739  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:14.182138  213635 cri.go:89] found id: ""
	I0414 17:49:14.182168  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.182178  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:14.182191  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:14.182245  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:14.215897  213635 cri.go:89] found id: ""
	I0414 17:49:14.215926  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.215939  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:14.215945  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:14.216007  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:14.250709  213635 cri.go:89] found id: ""
	I0414 17:49:14.250735  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.250745  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:14.250752  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:14.250827  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:14.284335  213635 cri.go:89] found id: ""
	I0414 17:49:14.284359  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.284369  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:14.284377  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:14.284437  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:14.320670  213635 cri.go:89] found id: ""
	I0414 17:49:14.320695  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.320705  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:14.320712  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:14.320772  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:14.352588  213635 cri.go:89] found id: ""
	I0414 17:49:14.352612  213635 logs.go:282] 0 containers: []
	W0414 17:49:14.352620  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:14.352630  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:14.352643  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:14.402495  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:14.402527  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:14.415185  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:14.415211  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:14.484937  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:14.484961  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:14.484976  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:14.568927  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:14.568962  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:17.105989  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:17.119732  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:17.119803  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:17.155999  213635 cri.go:89] found id: ""
	I0414 17:49:17.156027  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.156038  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:17.156046  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:17.156117  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:17.190158  213635 cri.go:89] found id: ""
	I0414 17:49:17.190180  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.190188  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:17.190193  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:17.190254  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:17.228075  213635 cri.go:89] found id: ""
	I0414 17:49:17.228116  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.228128  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:17.228135  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:17.228199  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:17.276284  213635 cri.go:89] found id: ""
	I0414 17:49:17.276311  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.276321  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:17.276328  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:17.276391  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:17.323644  213635 cri.go:89] found id: ""
	I0414 17:49:17.323672  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.323684  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:17.323691  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:17.323755  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:17.361870  213635 cri.go:89] found id: ""
	I0414 17:49:17.361898  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.361910  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:17.361917  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:17.361978  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:17.396346  213635 cri.go:89] found id: ""
	I0414 17:49:17.396371  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.396382  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:17.396389  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:17.396450  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:17.434395  213635 cri.go:89] found id: ""
	I0414 17:49:17.434425  213635 logs.go:282] 0 containers: []
	W0414 17:49:17.434434  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:17.434445  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:17.434460  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:17.486946  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:17.486987  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:17.504167  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:17.504200  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:17.596627  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:17.596655  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:17.596671  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:17.688874  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:17.688911  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:15.285389  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:17.783942  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:20.238457  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:20.252780  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:20.252859  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:20.299511  213635 cri.go:89] found id: ""
	I0414 17:49:20.299535  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.299543  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:20.299549  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:20.299607  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:20.346458  213635 cri.go:89] found id: ""
	I0414 17:49:20.346484  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.346493  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:20.346500  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:20.346552  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:20.390657  213635 cri.go:89] found id: ""
	I0414 17:49:20.390677  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.390684  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:20.390689  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:20.390738  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:20.435444  213635 cri.go:89] found id: ""
	I0414 17:49:20.435468  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.435474  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:20.435480  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:20.435520  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:20.470010  213635 cri.go:89] found id: ""
	I0414 17:49:20.470030  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.470036  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:20.470044  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:20.470089  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:20.517097  213635 cri.go:89] found id: ""
	I0414 17:49:20.517130  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.517141  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:20.517149  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:20.517216  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:20.558688  213635 cri.go:89] found id: ""
	I0414 17:49:20.558717  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.558727  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:20.558733  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:20.558796  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:20.598644  213635 cri.go:89] found id: ""
	I0414 17:49:20.598679  213635 logs.go:282] 0 containers: []
	W0414 17:49:20.598687  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:20.598695  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:20.598706  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:20.674514  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:20.674571  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:20.691779  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:20.691808  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:20.759608  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:20.759640  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:20.759652  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:20.852072  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:20.852104  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:23.435254  212269 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 17:49:23.435346  212269 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:49:23.435469  212269 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:49:23.435587  212269 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:49:23.435698  212269 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 17:49:23.435786  212269 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:49:23.437325  212269 out.go:235]   - Generating certificates and keys ...
	I0414 17:49:23.437460  212269 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:49:23.437553  212269 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:49:23.437665  212269 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 17:49:23.437786  212269 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 17:49:23.437914  212269 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 17:49:23.438026  212269 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 17:49:23.438157  212269 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 17:49:23.438253  212269 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 17:49:23.438370  212269 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 17:49:23.438493  212269 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 17:49:23.438556  212269 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 17:49:23.438629  212269 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:49:23.438700  212269 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:49:23.438783  212269 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 17:49:23.438855  212269 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:49:23.438939  212269 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:49:23.439013  212269 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:49:23.439123  212269 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:49:23.439213  212269 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:49:23.440637  212269 out.go:235]   - Booting up control plane ...
	I0414 17:49:23.440748  212269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:49:23.440847  212269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:49:23.440957  212269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:49:23.441124  212269 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:49:23.441250  212269 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:49:23.441317  212269 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:49:23.441508  212269 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 17:49:23.441668  212269 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 17:49:23.441883  212269 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001443308s
	I0414 17:49:23.442009  212269 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 17:49:23.442095  212269 kubeadm.go:310] [api-check] The API server is healthy after 5.001630109s
	I0414 17:49:23.442250  212269 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 17:49:23.442407  212269 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 17:49:23.442500  212269 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 17:49:23.442809  212269 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-721806 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 17:49:23.442894  212269 kubeadm.go:310] [bootstrap-token] Using token: hi4egh.pplxy8fivi6fy4jt
	I0414 17:49:23.444130  212269 out.go:235]   - Configuring RBAC rules ...
	I0414 17:49:23.444269  212269 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 17:49:23.444373  212269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 17:49:23.444555  212269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 17:49:23.444724  212269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 17:49:23.444870  212269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 17:49:23.444983  212269 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 17:49:23.445140  212269 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 17:49:23.445205  212269 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 17:49:23.445269  212269 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 17:49:23.445279  212269 kubeadm.go:310] 
	I0414 17:49:23.445361  212269 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 17:49:23.445373  212269 kubeadm.go:310] 
	I0414 17:49:23.445471  212269 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 17:49:23.445483  212269 kubeadm.go:310] 
	I0414 17:49:23.445514  212269 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 17:49:23.445592  212269 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 17:49:23.445659  212269 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 17:49:23.445669  212269 kubeadm.go:310] 
	I0414 17:49:23.445746  212269 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 17:49:23.445756  212269 kubeadm.go:310] 
	I0414 17:49:23.445816  212269 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 17:49:23.445896  212269 kubeadm.go:310] 
	I0414 17:49:23.445976  212269 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 17:49:23.446046  212269 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 17:49:23.446113  212269 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 17:49:23.446122  212269 kubeadm.go:310] 
	I0414 17:49:23.446188  212269 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 17:49:23.446250  212269 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 17:49:23.446255  212269 kubeadm.go:310] 
	I0414 17:49:23.446323  212269 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hi4egh.pplxy8fivi6fy4jt \
	I0414 17:49:23.446414  212269 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d \
	I0414 17:49:23.446434  212269 kubeadm.go:310] 	--control-plane 
	I0414 17:49:23.446438  212269 kubeadm.go:310] 
	I0414 17:49:23.446507  212269 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 17:49:23.446513  212269 kubeadm.go:310] 
	I0414 17:49:23.446582  212269 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hi4egh.pplxy8fivi6fy4jt \
	I0414 17:49:23.446707  212269 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d 
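Note: the --discovery-token-ca-cert-hash that kubeadm prints in both join commands above is, per kubeadm's documented behavior, the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A self-contained Go sketch (not minikube source) that recomputes it from the CA file kubeadm init writes on the node:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Read the cluster CA certificate written by kubeadm init.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded Subject Public Key Info, as kubeadm does.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}

Run on the control-plane VM, this should print the same sha256:58a703... digest shown in the join commands.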
	I0414 17:49:23.446730  212269 cni.go:84] Creating CNI manager for ""
	I0414 17:49:23.446739  212269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:49:23.448085  212269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 17:49:20.288087  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:22.783079  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:23.449087  212269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 17:49:23.461577  212269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
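The 496-byte conflist pushed to /etc/cni/net.d/1-k8s.conflist above is not reproduced in the log, so its exact contents are not recoverable here. Purely as an illustration of what a bridge CNI conflist of this kind generally looks like, a Go sketch that writes one (every field value below is an assumption, not the real payload):

package main

import (
	"encoding/json"
	"os"
)

func main() {
	// General shape of a bridge CNI conflist; values are illustrative.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	data, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	// Writing under /etc/cni/net.d requires root, hence the
	// sudo mkdir in the log line just above.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
		panic(err)
	}
}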
	I0414 17:49:23.480701  212269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 17:49:23.480761  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:23.480789  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-721806 minikube.k8s.io/updated_at=2025_04_14T17_49_23_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f1e69a1cd498979c80dbe968253c827f6eb2cf37 minikube.k8s.io/name=no-preload-721806 minikube.k8s.io/primary=true
	I0414 17:49:23.822239  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:23.822379  212269 ops.go:34] apiserver oom_adj: -16
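The -16 reported here comes from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` probe a few lines earlier; a negative value makes the kernel OOM killer less inclined to target the apiserver. The same probe in plain Go (a sketch; the pgrep flags are an assumption, the logged shell form used a bare pgrep):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Newest exact-match kube-apiserver PID.
	pid, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s", data)
}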
	I0414 17:49:24.322913  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:24.822958  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:25.322967  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:25.823342  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:26.322688  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:26.822585  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:27.322370  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:27.823299  212269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:49:27.966937  212269 kubeadm.go:1113] duration metric: took 4.486233002s to wait for elevateKubeSystemPrivileges
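The run of identical `kubectl get sa default` lines above, spaced roughly 500ms apart, is a poll: the step retries until the `default` service account exists in the fresh cluster, and the 4.49s duration metric covers the whole loop. A sketch of that polling pattern (command string copied from the log; the timeout and loop details are assumptions, not minikube's elevateKubeSystemPrivileges code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	cmd := []string{"sudo", "/var/lib/minikube/binaries/v1.32.2/kubectl",
		"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig"}
	start := time.Now()
	deadline := start.Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		// Exit status 0 means the default SA is visible.
		if err := exec.Command(cmd[0], cmd[1:]...).Run(); err == nil {
			fmt.Printf("default service account ready after %s\n", time.Since(start))
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing in the log
	}
	fmt.Println("timed out waiting for the default service account")
}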
	I0414 17:49:27.966971  212269 kubeadm.go:394] duration metric: took 5m39.576838178s to StartCluster
	I0414 17:49:27.966992  212269 settings.go:142] acquiring lock: {Name:mk0f1596f566b3225bf96154f374fff0641b21e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:49:27.967081  212269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:49:27.968121  212269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:49:27.968336  212269 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 17:49:27.968477  212269 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
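Of the roughly forty keys in the toEnable map above, only dashboard, default-storageclass, metrics-server and storage-provisioner are true, which is exactly the set echoed as enabled further down. A trivial sketch of reducing such a map to that list (abbreviated map, illustrative only):

package main

import (
	"fmt"
	"sort"
)

func main() {
	toEnable := map[string]bool{ // abbreviated from the log line above
		"dashboard": true, "default-storageclass": true,
		"metrics-server": true, "storage-provisioner": true,
		"ingress": false, "registry": false,
	}
	var enabled []string
	for name, on := range toEnable {
		if on {
			enabled = append(enabled, name)
		}
	}
	sort.Strings(enabled)
	fmt.Println("enabled addons:", enabled)
}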
	I0414 17:49:27.968572  212269 config.go:182] Loaded profile config "no-preload-721806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:49:27.968640  212269 addons.go:69] Setting storage-provisioner=true in profile "no-preload-721806"
	I0414 17:49:27.968663  212269 addons.go:238] Setting addon storage-provisioner=true in "no-preload-721806"
	I0414 17:49:27.968667  212269 addons.go:69] Setting default-storageclass=true in profile "no-preload-721806"
	I0414 17:49:27.968685  212269 addons.go:69] Setting dashboard=true in profile "no-preload-721806"
	I0414 17:49:27.968689  212269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-721806"
	W0414 17:49:27.968693  212269 addons.go:247] addon storage-provisioner should already be in state true
	I0414 17:49:27.968698  212269 addons.go:69] Setting metrics-server=true in profile "no-preload-721806"
	I0414 17:49:27.968701  212269 addons.go:238] Setting addon dashboard=true in "no-preload-721806"
	W0414 17:49:27.968711  212269 addons.go:247] addon dashboard should already be in state true
	I0414 17:49:27.968713  212269 addons.go:238] Setting addon metrics-server=true in "no-preload-721806"
	W0414 17:49:27.968720  212269 addons.go:247] addon metrics-server should already be in state true
	I0414 17:49:27.968725  212269 host.go:66] Checking if "no-preload-721806" exists ...
	I0414 17:49:27.968737  212269 host.go:66] Checking if "no-preload-721806" exists ...
	I0414 17:49:27.968748  212269 host.go:66] Checking if "no-preload-721806" exists ...
	I0414 17:49:27.969136  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.969159  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.969174  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:27.969190  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:27.969136  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.969242  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:27.969294  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.969328  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:27.969547  212269 out.go:177] * Verifying Kubernetes components...
	I0414 17:49:27.970928  212269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:49:27.985862  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41137
	I0414 17:49:27.985940  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35053
	I0414 17:49:27.986359  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:27.986478  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:27.986876  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:27.986894  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:27.987035  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:27.987050  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:27.987339  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:27.987522  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetState
	I0414 17:49:27.987561  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:27.988294  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.988321  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:27.988647  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39863
	I0414 17:49:27.989258  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:27.990683  212269 addons.go:238] Setting addon default-storageclass=true in "no-preload-721806"
	W0414 17:49:27.990703  212269 addons.go:247] addon default-storageclass should already be in state true
	I0414 17:49:27.990734  212269 host.go:66] Checking if "no-preload-721806" exists ...
	I0414 17:49:27.991093  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.991124  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:27.991371  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34825
	I0414 17:49:27.991468  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:27.991483  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:27.991880  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:27.992418  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.992453  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:27.992667  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:27.993166  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:27.993181  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:27.993592  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:27.994151  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:27.994179  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:28.006693  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34701
	I0414 17:49:28.006725  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45783
	I0414 17:49:28.007104  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:28.007150  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:28.007487  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:28.007500  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:28.007611  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:28.007630  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:28.007860  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:28.008020  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetState
	I0414 17:49:28.008067  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:28.008548  212269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:49:28.008586  212269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:49:28.010355  212269 main.go:141] libmachine: (no-preload-721806) Calling .DriverName
	I0414 17:49:28.011939  212269 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0414 17:49:28.012527  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
	I0414 17:49:28.013128  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:28.013676  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:28.013704  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:28.013896  212269 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0414 17:49:28.014150  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:28.014326  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetState
	I0414 17:49:28.014618  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0414 17:49:28.014827  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0414 17:49:28.014838  212269 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0414 17:49:28.014860  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHHostname
	I0414 17:49:28.015140  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:28.015587  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:28.015603  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:28.016012  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:28.016211  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetState
	I0414 17:49:28.016728  212269 main.go:141] libmachine: (no-preload-721806) Calling .DriverName
	I0414 17:49:28.018254  212269 main.go:141] libmachine: (no-preload-721806) Calling .DriverName
	I0414 17:49:28.018509  212269 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0414 17:49:28.018914  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.019375  212269 main.go:141] libmachine: (no-preload-721806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:f0:13", ip: ""} in network mk-no-preload-721806: {Iface:virbr1 ExpiryTime:2025-04-14 18:43:22 +0000 UTC Type:0 Mac:52:54:00:96:f0:13 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:no-preload-721806 Clientid:01:52:54:00:96:f0:13}
	I0414 17:49:28.019390  212269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:49:23.392749  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:23.409465  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:23.409526  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:23.449515  213635 cri.go:89] found id: ""
	I0414 17:49:23.449542  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.449552  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:23.449559  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:23.449609  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:23.490201  213635 cri.go:89] found id: ""
	I0414 17:49:23.490225  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.490234  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:23.490242  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:23.490294  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:23.528644  213635 cri.go:89] found id: ""
	I0414 17:49:23.528673  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.528684  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:23.528692  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:23.528755  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:23.572217  213635 cri.go:89] found id: ""
	I0414 17:49:23.572245  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.572256  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:23.572263  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:23.572319  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:23.612901  213635 cri.go:89] found id: ""
	I0414 17:49:23.612922  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.612930  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:23.612936  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:23.612981  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:23.668230  213635 cri.go:89] found id: ""
	I0414 17:49:23.668256  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.668265  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:23.668271  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:23.668322  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:23.714238  213635 cri.go:89] found id: ""
	I0414 17:49:23.714265  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.714275  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:23.714282  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:23.714331  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:23.763817  213635 cri.go:89] found id: ""
	I0414 17:49:23.763853  213635 logs.go:282] 0 containers: []
	W0414 17:49:23.763863  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:23.763872  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:23.763884  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:23.836441  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:23.836486  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:23.861896  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:23.861940  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:23.944757  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:23.944787  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:23.944806  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:24.029884  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:24.029923  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
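The "container status" step shells out with a fallback chain: use crictl if it resolves, otherwise fall back to `docker ps -a`. The same try-CRI-then-docker fallback rendered in Go (a sketch of the pattern, not minikube's logs.go):

package main

import (
	"fmt"
	"os/exec"
)

// listAllContainers tries crictl first and falls back to docker,
// mirroring the logged shell one-liner above.
func listAllContainers() ([]byte, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return out, nil
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := listAllContainers()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}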
	I0414 17:49:26.571950  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:26.585122  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:26.585180  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:26.623368  213635 cri.go:89] found id: ""
	I0414 17:49:26.623392  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.623401  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:26.623409  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:26.623463  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:26.657588  213635 cri.go:89] found id: ""
	I0414 17:49:26.657624  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.657635  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:26.657642  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:26.657699  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:26.690827  213635 cri.go:89] found id: ""
	I0414 17:49:26.690854  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.690862  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:26.690867  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:26.690916  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:26.732830  213635 cri.go:89] found id: ""
	I0414 17:49:26.732866  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.732876  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:26.732883  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:26.732946  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:26.767719  213635 cri.go:89] found id: ""
	I0414 17:49:26.767770  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.767783  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:26.767793  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:26.767861  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:26.805504  213635 cri.go:89] found id: ""
	I0414 17:49:26.805531  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.805540  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:26.805547  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:26.805607  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:26.848736  213635 cri.go:89] found id: ""
	I0414 17:49:26.848761  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.848769  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:26.848774  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:26.848831  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:26.888964  213635 cri.go:89] found id: ""
	I0414 17:49:26.888996  213635 logs.go:282] 0 containers: []
	W0414 17:49:26.889006  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:26.889017  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:26.889030  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:26.902789  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:26.902819  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:26.984479  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:26.984503  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:26.984516  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:27.072453  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:27.072491  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:27.114247  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:27.114282  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:25.282623  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:27.781278  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:28.019381  212269 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0414 17:49:28.019465  212269 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0414 17:49:28.019483  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHHostname
	I0414 17:49:28.019407  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined IP address 192.168.39.89 and MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.019634  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHPort
	I0414 17:49:28.019797  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHKeyPath
	I0414 17:49:28.019918  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHUsername
	I0414 17:49:28.020024  212269 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806/id_rsa Username:docker}
	I0414 17:49:28.020513  212269 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:49:28.020530  212269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 17:49:28.020546  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHHostname
	I0414 17:49:28.024119  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.024370  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.024926  212269 main.go:141] libmachine: (no-preload-721806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:f0:13", ip: ""} in network mk-no-preload-721806: {Iface:virbr1 ExpiryTime:2025-04-14 18:43:22 +0000 UTC Type:0 Mac:52:54:00:96:f0:13 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:no-preload-721806 Clientid:01:52:54:00:96:f0:13}
	I0414 17:49:28.024940  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHPort
	I0414 17:49:28.024945  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined IP address 192.168.39.89 and MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.025142  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHKeyPath
	I0414 17:49:28.025307  212269 main.go:141] libmachine: (no-preload-721806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:f0:13", ip: ""} in network mk-no-preload-721806: {Iface:virbr1 ExpiryTime:2025-04-14 18:43:22 +0000 UTC Type:0 Mac:52:54:00:96:f0:13 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:no-preload-721806 Clientid:01:52:54:00:96:f0:13}
	I0414 17:49:28.025318  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined IP address 192.168.39.89 and MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.025337  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHUsername
	I0414 17:49:28.025447  212269 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806/id_rsa Username:docker}
	I0414 17:49:28.025773  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHPort
	I0414 17:49:28.025953  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHKeyPath
	I0414 17:49:28.026140  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHUsername
	I0414 17:49:28.026298  212269 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806/id_rsa Username:docker}
	I0414 17:49:28.028168  212269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33317
	I0414 17:49:28.028575  212269 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:49:28.028954  212269 main.go:141] libmachine: Using API Version  1
	I0414 17:49:28.028977  212269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:49:28.029414  212269 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:49:28.029592  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetState
	I0414 17:49:28.031192  212269 main.go:141] libmachine: (no-preload-721806) Calling .DriverName
	I0414 17:49:28.031456  212269 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 17:49:28.031470  212269 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 17:49:28.031486  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHHostname
	I0414 17:49:28.034539  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.034997  212269 main.go:141] libmachine: (no-preload-721806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:f0:13", ip: ""} in network mk-no-preload-721806: {Iface:virbr1 ExpiryTime:2025-04-14 18:43:22 +0000 UTC Type:0 Mac:52:54:00:96:f0:13 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:no-preload-721806 Clientid:01:52:54:00:96:f0:13}
	I0414 17:49:28.035014  212269 main.go:141] libmachine: (no-preload-721806) DBG | domain no-preload-721806 has defined IP address 192.168.39.89 and MAC address 52:54:00:96:f0:13 in network mk-no-preload-721806
	I0414 17:49:28.035149  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHPort
	I0414 17:49:28.035305  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHKeyPath
	I0414 17:49:28.035463  212269 main.go:141] libmachine: (no-preload-721806) Calling .GetSSHUsername
	I0414 17:49:28.035588  212269 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806/id_rsa Username:docker}
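Each sshutil line above records a fresh SSH client keyed by the VM's IP, port 22 and the profile's private key. A minimal golang.org/x/crypto/ssh sketch that dials the same way (IP, user and key path copied from the log; everything else, including the command run, is an assumption and host-key checking is skipped as it would be for a throwaway test VM):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20349-149500/.minikube/machines/no-preload-721806/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for ephemeral test VMs
	}
	client, err := ssh.Dial("tcp", "192.168.39.89:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("%s (err=%v)\n", out, err)
}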
	I0414 17:49:28.215025  212269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:49:28.277431  212269 node_ready.go:35] waiting up to 6m0s for node "no-preload-721806" to be "Ready" ...
	I0414 17:49:28.311336  212269 node_ready.go:49] node "no-preload-721806" has status "Ready":"True"
	I0414 17:49:28.311360  212269 node_ready.go:38] duration metric: took 33.901113ms for node "no-preload-721806" to be "Ready" ...
	I0414 17:49:28.311374  212269 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:49:28.317467  212269 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-6cjwn" in "kube-system" namespace to be "Ready" ...
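The pod_ready.go lines that follow poll each system pod's Ready condition until it flips to True. A minimal client-go equivalent (a sketch under assumed client-go versions and poll interval, not minikube's pod_ready implementation), using the kubeconfig path, namespace and pod name from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // the 6m0s budget from the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(
			context.Background(), "coredns-668d6bf9bc-6cjwn", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // poll interval assumed
	}
	fmt.Println("timed out waiting for Ready")
}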
	I0414 17:49:28.374855  212269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 17:49:28.390490  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0414 17:49:28.390513  212269 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0414 17:49:28.406595  212269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:49:28.437361  212269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0414 17:49:28.437392  212269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0414 17:49:28.469744  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0414 17:49:28.469782  212269 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0414 17:49:28.521154  212269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0414 17:49:28.521179  212269 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0414 17:49:28.548853  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0414 17:49:28.548878  212269 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0414 17:49:28.614511  212269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 17:49:28.614541  212269 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0414 17:49:28.649638  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0414 17:49:28.649661  212269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0414 17:49:28.703339  212269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 17:49:28.777954  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0414 17:49:28.777987  212269 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0414 17:49:28.845025  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:28.845054  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:28.845362  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:28.845380  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:28.845392  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:28.845399  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:28.845652  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:28.845672  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:28.858160  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:28.858179  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:28.858491  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:28.858514  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:28.858515  212269 main.go:141] libmachine: (no-preload-721806) DBG | Closing plugin on server side
	I0414 17:49:28.893505  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0414 17:49:28.893539  212269 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0414 17:49:28.960993  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0414 17:49:28.961020  212269 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0414 17:49:29.067780  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0414 17:49:29.067815  212269 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0414 17:49:29.129670  212269 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0414 17:49:29.129698  212269 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0414 17:49:29.201772  212269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0414 17:49:29.598669  212269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.192034026s)
	I0414 17:49:29.598739  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:29.598752  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:29.599101  212269 main.go:141] libmachine: (no-preload-721806) DBG | Closing plugin on server side
	I0414 17:49:29.599101  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:29.599154  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:29.599177  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:29.599191  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:29.599468  212269 main.go:141] libmachine: (no-preload-721806) DBG | Closing plugin on server side
	I0414 17:49:29.599477  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:29.599505  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:30.044475  212269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.341048776s)
	I0414 17:49:30.044551  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:30.044569  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:30.044858  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:30.044874  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:30.044884  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:30.044891  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:30.045277  212269 main.go:141] libmachine: (no-preload-721806) DBG | Closing plugin on server side
	I0414 17:49:30.045289  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:30.045341  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:30.045355  212269 addons.go:479] Verifying addon metrics-server=true in "no-preload-721806"
	I0414 17:49:30.329870  212269 pod_ready.go:103] pod "coredns-668d6bf9bc-6cjwn" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:31.062251  212269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.860435662s)
	I0414 17:49:31.062298  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:31.062312  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:31.062629  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:31.062652  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:31.062662  212269 main.go:141] libmachine: Making call to close driver server
	I0414 17:49:31.062670  212269 main.go:141] libmachine: (no-preload-721806) Calling .Close
	I0414 17:49:31.062906  212269 main.go:141] libmachine: (no-preload-721806) DBG | Closing plugin on server side
	I0414 17:49:31.062951  212269 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:49:31.062964  212269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:49:31.064362  212269 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-721806 addons enable metrics-server
	
	I0414 17:49:31.065558  212269 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0414 17:49:29.668064  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:29.685205  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:29.685289  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:29.729725  213635 cri.go:89] found id: ""
	I0414 17:49:29.729753  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.729760  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:29.729766  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:29.729823  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:29.788536  213635 cri.go:89] found id: ""
	I0414 17:49:29.788569  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.788581  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:29.788588  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:29.788656  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:29.832032  213635 cri.go:89] found id: ""
	I0414 17:49:29.832060  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.832069  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:29.832074  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:29.832123  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:29.864981  213635 cri.go:89] found id: ""
	I0414 17:49:29.865009  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.865019  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:29.865025  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:29.865091  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:29.901024  213635 cri.go:89] found id: ""
	I0414 17:49:29.901060  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.901071  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:29.901079  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:29.901149  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:29.938790  213635 cri.go:89] found id: ""
	I0414 17:49:29.938820  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.938832  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:29.938840  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:29.938912  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:29.981414  213635 cri.go:89] found id: ""
	I0414 17:49:29.981445  213635 logs.go:282] 0 containers: []
	W0414 17:49:29.981456  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:29.981463  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:29.981526  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:30.022510  213635 cri.go:89] found id: ""
	I0414 17:49:30.022545  213635 logs.go:282] 0 containers: []
	W0414 17:49:30.022558  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:30.022571  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:30.022588  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:30.077221  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:30.077255  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:30.091513  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:30.091552  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:30.164964  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:30.164991  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:30.165004  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:30.246281  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:30.246321  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:32.807018  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:32.825456  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:32.825531  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:32.864079  213635 cri.go:89] found id: ""
	I0414 17:49:32.864116  213635 logs.go:282] 0 containers: []
	W0414 17:49:32.864126  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:32.864133  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:32.864191  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:32.905763  213635 cri.go:89] found id: ""
	I0414 17:49:32.905792  213635 logs.go:282] 0 containers: []
	W0414 17:49:32.905806  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:32.905813  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:32.905894  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:32.944126  213635 cri.go:89] found id: ""
	I0414 17:49:32.944167  213635 logs.go:282] 0 containers: []
	W0414 17:49:32.944186  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:32.944195  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:32.944258  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:32.983511  213635 cri.go:89] found id: ""
	I0414 17:49:32.983549  213635 logs.go:282] 0 containers: []
	W0414 17:49:32.983562  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:32.983571  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:32.983629  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:33.021383  213635 cri.go:89] found id: ""
	I0414 17:49:33.021411  213635 logs.go:282] 0 containers: []
	W0414 17:49:33.021422  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:33.021429  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:33.021488  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:33.058181  213635 cri.go:89] found id: ""
	I0414 17:49:33.058214  213635 logs.go:282] 0 containers: []
	W0414 17:49:33.058225  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:33.058233  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:33.058296  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:33.094426  213635 cri.go:89] found id: ""
	I0414 17:49:33.094459  213635 logs.go:282] 0 containers: []
	W0414 17:49:33.094470  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:33.094479  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:33.094537  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:33.139392  213635 cri.go:89] found id: ""
	I0414 17:49:33.139430  213635 logs.go:282] 0 containers: []
	W0414 17:49:33.139443  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:33.139455  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:33.139471  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:33.218814  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:33.218842  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:33.218860  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:29.783892  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:32.282499  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:31.066728  212269 addons.go:514] duration metric: took 3.098264633s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0414 17:49:32.824809  212269 pod_ready.go:103] pod "coredns-668d6bf9bc-6cjwn" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:35.323008  212269 pod_ready.go:103] pod "coredns-668d6bf9bc-6cjwn" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:33.325637  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:33.325678  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:33.363443  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:33.363473  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:33.427131  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:33.427167  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:35.942712  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:35.957936  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:35.958027  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:35.998316  213635 cri.go:89] found id: ""
	I0414 17:49:35.998343  213635 logs.go:282] 0 containers: []
	W0414 17:49:35.998354  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:35.998361  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:35.998419  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:36.032107  213635 cri.go:89] found id: ""
	I0414 17:49:36.032139  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.032149  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:36.032156  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:36.032211  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:36.070010  213635 cri.go:89] found id: ""
	I0414 17:49:36.070035  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.070043  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:36.070049  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:36.070104  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:36.105914  213635 cri.go:89] found id: ""
	I0414 17:49:36.105944  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.105962  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:36.105970  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:36.106036  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:36.140378  213635 cri.go:89] found id: ""
	I0414 17:49:36.140406  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.140418  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:36.140425  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:36.140487  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:36.178535  213635 cri.go:89] found id: ""
	I0414 17:49:36.178564  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.178575  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:36.178583  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:36.178652  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:36.217284  213635 cri.go:89] found id: ""
	I0414 17:49:36.217314  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.217324  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:36.217330  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:36.217391  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:36.251770  213635 cri.go:89] found id: ""
	I0414 17:49:36.251805  213635 logs.go:282] 0 containers: []
	W0414 17:49:36.251818  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:36.251835  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:36.251850  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:36.322858  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:36.322906  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:36.337902  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:36.337939  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:36.415729  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:36.415752  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:36.415767  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:36.512960  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:36.513000  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:36.827356  212269 pod_ready.go:93] pod "coredns-668d6bf9bc-6cjwn" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:36.827377  212269 pod_ready.go:82] duration metric: took 8.509888872s for pod "coredns-668d6bf9bc-6cjwn" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.827386  212269 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-bng87" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.869474  212269 pod_ready.go:93] pod "coredns-668d6bf9bc-bng87" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:36.869506  212269 pod_ready.go:82] duration metric: took 42.1117ms for pod "coredns-668d6bf9bc-bng87" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.869522  212269 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.896002  212269 pod_ready.go:93] pod "etcd-no-preload-721806" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:36.896034  212269 pod_ready.go:82] duration metric: took 26.503053ms for pod "etcd-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.896046  212269 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.910284  212269 pod_ready.go:93] pod "kube-apiserver-no-preload-721806" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:36.910332  212269 pod_ready.go:82] duration metric: took 14.277535ms for pod "kube-apiserver-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.910360  212269 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.917658  212269 pod_ready.go:93] pod "kube-controller-manager-no-preload-721806" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:36.917678  212269 pod_ready.go:82] duration metric: took 7.305319ms for pod "kube-controller-manager-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:36.917689  212269 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tktgt" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:37.227025  212269 pod_ready.go:93] pod "kube-proxy-tktgt" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:37.227047  212269 pod_ready.go:82] duration metric: took 309.350302ms for pod "kube-proxy-tktgt" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:37.227056  212269 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:37.621871  212269 pod_ready.go:93] pod "kube-scheduler-no-preload-721806" in "kube-system" namespace has status "Ready":"True"
	I0414 17:49:37.621901  212269 pod_ready.go:82] duration metric: took 394.836681ms for pod "kube-scheduler-no-preload-721806" in "kube-system" namespace to be "Ready" ...
	I0414 17:49:37.621909  212269 pod_ready.go:39] duration metric: took 9.310525251s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:49:37.621924  212269 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:49:37.621974  212269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:37.660143  212269 api_server.go:72] duration metric: took 9.691771257s to wait for apiserver process to appear ...
	I0414 17:49:37.660171  212269 api_server.go:88] waiting for apiserver healthz status ...
	I0414 17:49:37.660193  212269 api_server.go:253] Checking apiserver healthz at https://192.168.39.89:8443/healthz ...
	I0414 17:49:37.665313  212269 api_server.go:279] https://192.168.39.89:8443/healthz returned 200:
	ok
	I0414 17:49:37.666371  212269 api_server.go:141] control plane version: v1.32.2
	I0414 17:49:37.666390  212269 api_server.go:131] duration metric: took 6.212109ms to wait for apiserver health ...
	I0414 17:49:37.666397  212269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 17:49:37.823477  212269 system_pods.go:59] 9 kube-system pods found
	I0414 17:49:37.823504  212269 system_pods.go:61] "coredns-668d6bf9bc-6cjwn" [3fb5680f-8bc6-4d35-abbf-19108c2242d3] Running
	I0414 17:49:37.823509  212269 system_pods.go:61] "coredns-668d6bf9bc-bng87" [0ae7cd1a-9760-43aa-b0ac-9f66c7e505d2] Running
	I0414 17:49:37.823513  212269 system_pods.go:61] "etcd-no-preload-721806" [6f30ffea-8f3a-4e21-9fd6-c9366bb997e2] Running
	I0414 17:49:37.823516  212269 system_pods.go:61] "kube-apiserver-no-preload-721806" [bc7d4172-ee21-4d53-a4a6-9bb7272d8b24] Running
	I0414 17:49:37.823521  212269 system_pods.go:61] "kube-controller-manager-no-preload-721806" [346266a0-a376-466c-9ebb-46772557740b] Running
	I0414 17:49:37.823525  212269 system_pods.go:61] "kube-proxy-tktgt" [984a1b9b-3c51-45d0-86bd-3ca64d1b3af8] Running
	I0414 17:49:37.823529  212269 system_pods.go:61] "kube-scheduler-no-preload-721806" [2294ad27-ffc4-4181-9bef-f865956252ac] Running
	I0414 17:49:37.823537  212269 system_pods.go:61] "metrics-server-f79f97bbb-f99gx" [c2d0b638-6f0e-41d7-b4e3-4e0f5a619c86] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:49:37.823547  212269 system_pods.go:61] "storage-provisioner" [463e19f1-b7aa-46ff-b5c7-99e1207bff9e] Running
	I0414 17:49:37.823561  212269 system_pods.go:74] duration metric: took 157.157807ms to wait for pod list to return data ...
	I0414 17:49:37.823571  212269 default_sa.go:34] waiting for default service account to be created ...
	I0414 17:49:38.021598  212269 default_sa.go:45] found service account: "default"
	I0414 17:49:38.021626  212269 default_sa.go:55] duration metric: took 198.045961ms for default service account to be created ...
	I0414 17:49:38.021642  212269 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 17:49:38.222171  212269 system_pods.go:86] 9 kube-system pods found
	I0414 17:49:38.222205  212269 system_pods.go:89] "coredns-668d6bf9bc-6cjwn" [3fb5680f-8bc6-4d35-abbf-19108c2242d3] Running
	I0414 17:49:38.222210  212269 system_pods.go:89] "coredns-668d6bf9bc-bng87" [0ae7cd1a-9760-43aa-b0ac-9f66c7e505d2] Running
	I0414 17:49:38.222214  212269 system_pods.go:89] "etcd-no-preload-721806" [6f30ffea-8f3a-4e21-9fd6-c9366bb997e2] Running
	I0414 17:49:38.222217  212269 system_pods.go:89] "kube-apiserver-no-preload-721806" [bc7d4172-ee21-4d53-a4a6-9bb7272d8b24] Running
	I0414 17:49:38.222220  212269 system_pods.go:89] "kube-controller-manager-no-preload-721806" [346266a0-a376-466c-9ebb-46772557740b] Running
	I0414 17:49:38.222224  212269 system_pods.go:89] "kube-proxy-tktgt" [984a1b9b-3c51-45d0-86bd-3ca64d1b3af8] Running
	I0414 17:49:38.222228  212269 system_pods.go:89] "kube-scheduler-no-preload-721806" [2294ad27-ffc4-4181-9bef-f865956252ac] Running
	I0414 17:49:38.222233  212269 system_pods.go:89] "metrics-server-f79f97bbb-f99gx" [c2d0b638-6f0e-41d7-b4e3-4e0f5a619c86] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:49:38.222237  212269 system_pods.go:89] "storage-provisioner" [463e19f1-b7aa-46ff-b5c7-99e1207bff9e] Running
	I0414 17:49:38.222247  212269 system_pods.go:126] duration metric: took 200.597392ms to wait for k8s-apps to be running ...
	I0414 17:49:38.222257  212269 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 17:49:38.222316  212269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:49:38.258014  212269 system_svc.go:56] duration metric: took 35.747059ms WaitForService to wait for kubelet
	I0414 17:49:38.258046  212269 kubeadm.go:582] duration metric: took 10.289680192s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 17:49:38.258069  212269 node_conditions.go:102] verifying NodePressure condition ...
	I0414 17:49:38.422770  212269 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 17:49:38.422805  212269 node_conditions.go:123] node cpu capacity is 2
	I0414 17:49:38.422833  212269 node_conditions.go:105] duration metric: took 164.757743ms to run NodePressure ...
	I0414 17:49:38.422848  212269 start.go:241] waiting for startup goroutines ...
	I0414 17:49:38.422858  212269 start.go:246] waiting for cluster config update ...
	I0414 17:49:38.422873  212269 start.go:255] writing updated cluster config ...
	I0414 17:49:38.423253  212269 ssh_runner.go:195] Run: rm -f paused
	I0414 17:49:38.493521  212269 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 17:49:38.495382  212269 out.go:177] * Done! kubectl is now configured to use "no-preload-721806" cluster and "default" namespace by default
	I0414 17:49:34.781757  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:36.781990  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:39.053905  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:39.068768  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:39.068841  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:39.104418  213635 cri.go:89] found id: ""
	I0414 17:49:39.104446  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.104454  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:39.104460  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:39.104520  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:39.144556  213635 cri.go:89] found id: ""
	I0414 17:49:39.144587  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.144598  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:39.144605  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:39.144673  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:39.184890  213635 cri.go:89] found id: ""
	I0414 17:49:39.184923  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.184936  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:39.184946  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:39.185018  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:39.224321  213635 cri.go:89] found id: ""
	I0414 17:49:39.224353  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.224364  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:39.224372  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:39.224431  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:39.275363  213635 cri.go:89] found id: ""
	I0414 17:49:39.275393  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.275403  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:39.275411  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:39.275469  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:39.324682  213635 cri.go:89] found id: ""
	I0414 17:49:39.324715  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.324725  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:39.324733  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:39.324788  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:39.356862  213635 cri.go:89] found id: ""
	I0414 17:49:39.356891  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.356901  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:39.356908  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:39.356970  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:39.392157  213635 cri.go:89] found id: ""
	I0414 17:49:39.392186  213635 logs.go:282] 0 containers: []
	W0414 17:49:39.392197  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:39.392208  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:39.392223  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:39.484945  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:39.484971  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:39.484989  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:39.564891  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:39.564927  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:39.608513  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:39.608543  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:39.672726  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:39.672760  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:42.189948  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:42.203489  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:42.203560  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:42.243021  213635 cri.go:89] found id: ""
	I0414 17:49:42.243047  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.243057  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:42.243064  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:42.243152  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:42.285782  213635 cri.go:89] found id: ""
	I0414 17:49:42.285807  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.285817  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:42.285824  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:42.285898  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:42.318326  213635 cri.go:89] found id: ""
	I0414 17:49:42.318350  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.318360  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:42.318367  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:42.318421  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:42.351765  213635 cri.go:89] found id: ""
	I0414 17:49:42.351788  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.351795  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:42.351802  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:42.351862  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:42.382539  213635 cri.go:89] found id: ""
	I0414 17:49:42.382564  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.382574  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:42.382582  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:42.382639  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:42.416009  213635 cri.go:89] found id: ""
	I0414 17:49:42.416034  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.416044  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:42.416051  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:42.416107  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:42.447820  213635 cri.go:89] found id: ""
	I0414 17:49:42.447860  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.447871  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:42.447879  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:42.447941  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:42.486157  213635 cri.go:89] found id: ""
	I0414 17:49:42.486179  213635 logs.go:282] 0 containers: []
	W0414 17:49:42.486186  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:42.486195  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:42.486210  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:42.556937  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:42.556963  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:42.556980  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:42.636537  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:42.636569  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:42.676688  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:42.676717  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:42.728391  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:42.728421  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:38.783981  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:41.281841  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:43.282020  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:45.242452  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:45.256486  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:45.256558  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:45.291454  213635 cri.go:89] found id: ""
	I0414 17:49:45.291482  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.291490  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:45.291497  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:45.291552  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:45.328550  213635 cri.go:89] found id: ""
	I0414 17:49:45.328573  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.328583  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:45.328591  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:45.328638  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:45.365121  213635 cri.go:89] found id: ""
	I0414 17:49:45.365148  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.365155  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:45.365161  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:45.365216  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:45.402479  213635 cri.go:89] found id: ""
	I0414 17:49:45.402508  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.402519  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:45.402527  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:45.402580  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:45.433123  213635 cri.go:89] found id: ""
	I0414 17:49:45.433147  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.433155  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:45.433160  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:45.433206  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:45.466351  213635 cri.go:89] found id: ""
	I0414 17:49:45.466376  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.466383  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:45.466390  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:45.466442  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:45.498745  213635 cri.go:89] found id: ""
	I0414 17:49:45.498774  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.498785  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:45.498792  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:45.498866  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:45.531870  213635 cri.go:89] found id: ""
	I0414 17:49:45.531898  213635 logs.go:282] 0 containers: []
	W0414 17:49:45.531908  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:45.531919  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:45.531937  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:45.582230  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:45.582257  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:45.597164  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:45.597197  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:45.666569  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:45.666598  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:45.666616  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:45.746036  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:45.746068  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:45.782620  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:48.280928  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:48.284590  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:48.297947  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:48.298019  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:48.331443  213635 cri.go:89] found id: ""
	I0414 17:49:48.331469  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.331480  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:48.331487  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:48.331534  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:48.364569  213635 cri.go:89] found id: ""
	I0414 17:49:48.364602  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.364613  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:48.364620  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:48.364683  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:48.398063  213635 cri.go:89] found id: ""
	I0414 17:49:48.398097  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.398109  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:48.398118  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:48.398182  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:48.430783  213635 cri.go:89] found id: ""
	I0414 17:49:48.430808  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.430829  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:48.430837  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:48.430924  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:48.466378  213635 cri.go:89] found id: ""
	I0414 17:49:48.466410  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.466423  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:48.466432  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:48.466656  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:48.499766  213635 cri.go:89] found id: ""
	I0414 17:49:48.499819  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.499829  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:48.499837  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:48.499901  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:48.533192  213635 cri.go:89] found id: ""
	I0414 17:49:48.533218  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.533228  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:48.533235  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:48.533294  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:48.565138  213635 cri.go:89] found id: ""
	I0414 17:49:48.565159  213635 logs.go:282] 0 containers: []
	W0414 17:49:48.565167  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:48.565174  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:48.565183  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:48.616578  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:48.616609  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:48.630209  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:48.630232  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:48.697158  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:48.697184  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:48.697196  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:48.777141  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:48.777177  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:51.322807  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:51.336971  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:51.337037  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:51.373592  213635 cri.go:89] found id: ""
	I0414 17:49:51.373616  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.373623  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:51.373628  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:51.373675  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:51.410753  213635 cri.go:89] found id: ""
	I0414 17:49:51.410782  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.410791  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:51.410796  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:51.410846  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:51.443612  213635 cri.go:89] found id: ""
	I0414 17:49:51.443639  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.443650  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:51.443656  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:51.443717  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:51.476956  213635 cri.go:89] found id: ""
	I0414 17:49:51.476982  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.476990  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:51.476995  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:51.477041  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:51.512295  213635 cri.go:89] found id: ""
	I0414 17:49:51.512330  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.512349  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:51.512357  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:51.512420  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:51.553410  213635 cri.go:89] found id: ""
	I0414 17:49:51.553437  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.553445  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:51.553451  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:51.553514  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:51.593165  213635 cri.go:89] found id: ""
	I0414 17:49:51.593196  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.593205  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:51.593210  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:51.593259  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:51.634382  213635 cri.go:89] found id: ""
	I0414 17:49:51.634425  213635 logs.go:282] 0 containers: []
	W0414 17:49:51.634436  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:51.634446  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:51.634457  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:51.687688  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:51.687725  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:51.703569  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:51.703600  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:51.775371  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:51.775398  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:51.775414  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:51.851890  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:51.851936  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:50.282042  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:52.782200  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:54.389539  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:54.403233  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:54.403293  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:54.447655  213635 cri.go:89] found id: ""
	I0414 17:49:54.447675  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.447683  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:54.447690  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:54.447736  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:54.486882  213635 cri.go:89] found id: ""
	I0414 17:49:54.486905  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.486912  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:54.486917  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:54.486977  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:54.519544  213635 cri.go:89] found id: ""
	I0414 17:49:54.519570  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.519581  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:54.519588  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:54.519643  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:54.558646  213635 cri.go:89] found id: ""
	I0414 17:49:54.558671  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.558681  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:54.558689  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:54.558735  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:54.600650  213635 cri.go:89] found id: ""
	I0414 17:49:54.600674  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.600680  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:54.600685  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:54.600732  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:54.641206  213635 cri.go:89] found id: ""
	I0414 17:49:54.641231  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.641240  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:54.641247  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:54.641302  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:54.680671  213635 cri.go:89] found id: ""
	I0414 17:49:54.680698  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.680708  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:54.680715  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:54.680765  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:54.721028  213635 cri.go:89] found id: ""
	I0414 17:49:54.721050  213635 logs.go:282] 0 containers: []
	W0414 17:49:54.721056  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:54.721066  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:54.721076  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:54.769755  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:54.769782  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:54.785252  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:54.785273  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:54.855288  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:54.855308  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:54.855322  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:54.952695  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:54.952735  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:57.499933  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:49:57.514593  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:49:57.514658  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:49:57.549526  213635 cri.go:89] found id: ""
	I0414 17:49:57.549550  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.549558  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:49:57.549564  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:49:57.549610  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:49:57.582596  213635 cri.go:89] found id: ""
	I0414 17:49:57.582626  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.582637  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:49:57.582643  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:49:57.582695  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:49:57.622214  213635 cri.go:89] found id: ""
	I0414 17:49:57.622244  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.622252  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:49:57.622257  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:49:57.622313  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:49:57.655388  213635 cri.go:89] found id: ""
	I0414 17:49:57.655415  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.655422  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:49:57.655428  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:49:57.655474  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:49:57.692324  213635 cri.go:89] found id: ""
	I0414 17:49:57.692349  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.692357  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:49:57.692362  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:49:57.692407  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:49:57.725614  213635 cri.go:89] found id: ""
	I0414 17:49:57.725637  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.725644  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:49:57.725650  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:49:57.725700  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:49:57.757747  213635 cri.go:89] found id: ""
	I0414 17:49:57.757779  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.757788  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:49:57.757794  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:49:57.757868  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:49:57.791614  213635 cri.go:89] found id: ""
	I0414 17:49:57.791651  213635 logs.go:282] 0 containers: []
	W0414 17:49:57.791658  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:49:57.791666  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:49:57.791676  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:49:57.839950  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:49:57.839983  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:49:57.852850  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:49:57.852877  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:49:57.925310  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:49:57.925338  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:49:57.925355  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:49:58.008445  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:49:58.008484  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:54.783081  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:49:57.282711  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:00.550402  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:50:00.564239  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:50:00.564296  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:50:00.598410  213635 cri.go:89] found id: ""
	I0414 17:50:00.598439  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.598447  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:50:00.598452  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:50:00.598500  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:50:00.629470  213635 cri.go:89] found id: ""
	I0414 17:50:00.629489  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.629497  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:50:00.629502  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:50:00.629547  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:50:00.660663  213635 cri.go:89] found id: ""
	I0414 17:50:00.660686  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.660695  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:50:00.660703  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:50:00.660780  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:50:00.703422  213635 cri.go:89] found id: ""
	I0414 17:50:00.703450  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.703461  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:50:00.703467  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:50:00.703524  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:50:00.736355  213635 cri.go:89] found id: ""
	I0414 17:50:00.736378  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.736388  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:50:00.736394  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:50:00.736447  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:50:00.771432  213635 cri.go:89] found id: ""
	I0414 17:50:00.771460  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.771470  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:50:00.771478  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:50:00.771544  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:50:00.804453  213635 cri.go:89] found id: ""
	I0414 17:50:00.804474  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.804483  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:50:00.804490  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:50:00.804550  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:50:00.840934  213635 cri.go:89] found id: ""
	I0414 17:50:00.840962  213635 logs.go:282] 0 containers: []
	W0414 17:50:00.840971  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:50:00.840982  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:50:00.840994  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:50:00.888813  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:50:00.888846  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:50:00.901168  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:50:00.901188  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:50:00.970608  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:50:00.970638  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:50:00.970655  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:50:01.054190  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:50:01.054225  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:49:59.781167  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:01.783383  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:03.592930  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:50:03.607476  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:50:03.607542  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:50:03.647536  213635 cri.go:89] found id: ""
	I0414 17:50:03.647559  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.647567  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:50:03.647572  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:50:03.647616  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:50:03.687053  213635 cri.go:89] found id: ""
	I0414 17:50:03.687078  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.687086  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:50:03.687092  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:50:03.687135  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:50:03.724232  213635 cri.go:89] found id: ""
	I0414 17:50:03.724258  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.724268  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:50:03.724276  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:50:03.724327  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:50:03.758621  213635 cri.go:89] found id: ""
	I0414 17:50:03.758650  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.758661  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:50:03.758668  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:50:03.758735  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:50:03.792524  213635 cri.go:89] found id: ""
	I0414 17:50:03.792553  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.792563  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:50:03.792570  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:50:03.792623  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:50:03.823533  213635 cri.go:89] found id: ""
	I0414 17:50:03.823562  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.823569  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:50:03.823575  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:50:03.823619  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:50:03.855038  213635 cri.go:89] found id: ""
	I0414 17:50:03.855060  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.855067  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:50:03.855072  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:50:03.855122  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:50:03.886260  213635 cri.go:89] found id: ""
	I0414 17:50:03.886288  213635 logs.go:282] 0 containers: []
	W0414 17:50:03.886296  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:50:03.886304  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:50:03.886314  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:50:03.935750  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:50:03.935780  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:50:03.948571  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:50:03.948599  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:50:04.016600  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:50:04.016625  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:50:04.016641  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:50:04.095247  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:50:04.095278  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:50:06.633583  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:50:06.647292  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:50:06.647371  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:50:06.680994  213635 cri.go:89] found id: ""
	I0414 17:50:06.681023  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.681031  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:50:06.681036  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:50:06.681093  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:50:06.715235  213635 cri.go:89] found id: ""
	I0414 17:50:06.715262  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.715269  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:50:06.715275  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:50:06.715333  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:50:06.750320  213635 cri.go:89] found id: ""
	I0414 17:50:06.750349  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.750359  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:50:06.750367  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:50:06.750425  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:50:06.781634  213635 cri.go:89] found id: ""
	I0414 17:50:06.781657  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.781666  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:50:06.781673  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:50:06.781731  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:50:06.812684  213635 cri.go:89] found id: ""
	I0414 17:50:06.812709  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.812719  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:50:06.812727  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:50:06.812785  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:50:06.843417  213635 cri.go:89] found id: ""
	I0414 17:50:06.843447  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.843458  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:50:06.843466  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:50:06.843519  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:50:06.878915  213635 cri.go:89] found id: ""
	I0414 17:50:06.878943  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.878952  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:50:06.878958  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:50:06.879018  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:50:06.911647  213635 cri.go:89] found id: ""
	I0414 17:50:06.911670  213635 logs.go:282] 0 containers: []
	W0414 17:50:06.911680  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 17:50:06.911705  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:50:06.911720  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:50:06.977253  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 17:50:06.977286  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:50:06.977304  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:50:07.056442  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:50:07.056475  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:50:07.104053  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:50:07.104082  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 17:50:07.153444  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:50:07.153483  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:50:04.281983  213406 pod_ready.go:103] pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:04.776666  213406 pod_ready.go:82] duration metric: took 4m0.000384507s for pod "metrics-server-f79f97bbb-9vnsg" in "kube-system" namespace to be "Ready" ...
	E0414 17:50:04.776701  213406 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0414 17:50:04.776719  213406 pod_ready.go:39] duration metric: took 4m12.533820908s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:50:04.776753  213406 kubeadm.go:597] duration metric: took 4m20.355244776s to restartPrimaryControlPlane
	W0414 17:50:04.776834  213406 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0414 17:50:04.776879  213406 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 17:50:09.667392  213635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:50:09.680695  213635 kubeadm.go:597] duration metric: took 4m3.288338716s to restartPrimaryControlPlane
	W0414 17:50:09.680757  213635 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0414 17:50:09.680787  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 17:50:15.123013  213635 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.442204913s)
	I0414 17:50:15.123098  213635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:50:15.137541  213635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:50:15.147676  213635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:50:15.157224  213635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:50:15.157238  213635 kubeadm.go:157] found existing configuration files:
	
	I0414 17:50:15.157273  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:50:15.166484  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:50:15.166525  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:50:15.175831  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:50:15.184692  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:50:15.184731  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:50:15.193871  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:50:15.202947  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:50:15.202993  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:50:15.212451  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:50:15.221477  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:50:15.221512  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
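The grep-then-rm sequence above is the stale-kubeconfig cleanup: each config under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is deleted otherwise (a failed grep also covers the "No such file" case seen here) so that the following kubeadm init rewrites it. A minimal Go sketch of that shape, assuming passwordless sudo on the target as the test VM has:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const server = "https://control-plane.minikube.internal:8443"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		// Non-zero exit from grep means missing file or wrong server URL.
		if err := exec.Command("sudo", "grep", server, path).Run(); err != nil {
			fmt.Printf("%q may not be in %s - removing\n", server, path)
			exec.Command("sudo", "rm", "-f", path).Run() // best-effort, mirrors `rm -f`
		}
	}
}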
	I0414 17:50:15.231277  213635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:50:15.294259  213635 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 17:50:15.294330  213635 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:50:15.422321  213635 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:50:15.422476  213635 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:50:15.422622  213635 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 17:50:15.596146  213635 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:50:15.598667  213635 out.go:235]   - Generating certificates and keys ...
	I0414 17:50:15.598769  213635 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:50:15.598859  213635 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:50:15.598976  213635 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 17:50:15.599034  213635 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 17:50:15.599148  213635 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 17:50:15.599238  213635 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 17:50:15.599301  213635 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 17:50:15.599353  213635 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 17:50:15.599416  213635 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 17:50:15.599514  213635 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 17:50:15.599573  213635 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 17:50:15.599654  213635 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:50:15.664653  213635 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:50:15.743669  213635 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:50:15.813965  213635 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:50:16.089174  213635 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:50:16.103702  213635 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:50:16.104792  213635 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:50:16.104884  213635 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:50:16.250169  213635 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:50:16.252518  213635 out.go:235]   - Booting up control plane ...
	I0414 17:50:16.252640  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:50:16.262331  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:50:16.263648  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:50:16.264988  213635 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:50:16.267648  213635 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 17:50:32.538099  213406 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.761187529s)
	I0414 17:50:32.538165  213406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:50:32.553667  213406 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 17:50:32.563284  213406 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:50:32.572633  213406 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:50:32.572650  213406 kubeadm.go:157] found existing configuration files:
	
	I0414 17:50:32.572699  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:50:32.581936  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:50:32.581989  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:50:32.592144  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:50:32.600756  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:50:32.600806  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:50:32.610243  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:50:32.619999  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:50:32.620046  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:50:32.629791  213406 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:50:32.639153  213406 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:50:32.639192  213406 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 17:50:32.648625  213406 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:50:32.799107  213406 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:50:40.718968  213406 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 17:50:40.719047  213406 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:50:40.719195  213406 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:50:40.719284  213406 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:50:40.719402  213406 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 17:50:40.719495  213406 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:50:40.720874  213406 out.go:235]   - Generating certificates and keys ...
	I0414 17:50:40.720969  213406 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:50:40.721050  213406 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:50:40.721133  213406 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 17:50:40.721193  213406 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 17:50:40.721253  213406 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 17:50:40.721300  213406 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 17:50:40.721375  213406 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 17:50:40.721457  213406 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 17:50:40.721523  213406 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 17:50:40.721588  213406 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 17:50:40.721623  213406 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 17:50:40.721690  213406 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:50:40.721773  213406 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:50:40.721867  213406 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 17:50:40.721954  213406 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:50:40.722064  213406 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:50:40.722157  213406 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:50:40.722264  213406 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:50:40.722356  213406 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:50:40.724310  213406 out.go:235]   - Booting up control plane ...
	I0414 17:50:40.724425  213406 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:50:40.724523  213406 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:50:40.724621  213406 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:50:40.724763  213406 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:50:40.724890  213406 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:50:40.724962  213406 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:50:40.725139  213406 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 17:50:40.725268  213406 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 17:50:40.725360  213406 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000971318s
	I0414 17:50:40.725463  213406 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 17:50:40.725555  213406 kubeadm.go:310] [api-check] The API server is healthy after 4.502714129s
	I0414 17:50:40.725689  213406 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 17:50:40.725884  213406 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 17:50:40.725975  213406 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 17:50:40.726178  213406 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-418468 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 17:50:40.726245  213406 kubeadm.go:310] [bootstrap-token] Using token: 2kykq2.rhxxbbskj81go9zq
	I0414 17:50:40.727271  213406 out.go:235]   - Configuring RBAC rules ...
	I0414 17:50:40.727362  213406 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 17:50:40.727452  213406 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 17:50:40.727612  213406 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 17:50:40.727733  213406 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 17:50:40.727879  213406 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 17:50:40.728009  213406 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 17:50:40.728182  213406 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 17:50:40.728252  213406 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 17:50:40.728308  213406 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 17:50:40.728315  213406 kubeadm.go:310] 
	I0414 17:50:40.728365  213406 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 17:50:40.728374  213406 kubeadm.go:310] 
	I0414 17:50:40.728444  213406 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 17:50:40.728450  213406 kubeadm.go:310] 
	I0414 17:50:40.728487  213406 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 17:50:40.728568  213406 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 17:50:40.728654  213406 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 17:50:40.728663  213406 kubeadm.go:310] 
	I0414 17:50:40.728744  213406 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 17:50:40.728753  213406 kubeadm.go:310] 
	I0414 17:50:40.728829  213406 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 17:50:40.728841  213406 kubeadm.go:310] 
	I0414 17:50:40.728888  213406 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 17:50:40.728953  213406 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 17:50:40.729011  213406 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 17:50:40.729017  213406 kubeadm.go:310] 
	I0414 17:50:40.729090  213406 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 17:50:40.729163  213406 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 17:50:40.729169  213406 kubeadm.go:310] 
	I0414 17:50:40.729277  213406 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2kykq2.rhxxbbskj81go9zq \
	I0414 17:50:40.729434  213406 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d \
	I0414 17:50:40.729480  213406 kubeadm.go:310] 	--control-plane 
	I0414 17:50:40.729489  213406 kubeadm.go:310] 
	I0414 17:50:40.729585  213406 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 17:50:40.729599  213406 kubeadm.go:310] 
	I0414 17:50:40.729712  213406 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2kykq2.rhxxbbskj81go9zq \
	I0414 17:50:40.729880  213406 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d 
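The --discovery-token-ca-cert-hash in the join command above is a SHA-256 digest of the cluster CA certificate's Subject Public Key Info (SPKI), not of the whole certificate. A minimal Go sketch that recomputes it; the /var/lib/minikube/certs path follows the [certs] lines above (plain kubeadm would use /etc/kubernetes/pki/ca.crt):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the raw SPKI bytes of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum) // compare with the hash in the join command
}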
	I0414 17:50:40.729894  213406 cni.go:84] Creating CNI manager for ""
	I0414 17:50:40.729902  213406 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 17:50:40.731470  213406 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 17:50:40.732385  213406 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 17:50:40.744504  213406 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
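The scp line above copies a 496-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist; the file's contents are not shown in the log. The sketch below prints a representative bridge + portmap conflist of the standard CNI shape, with the subnet and field values illustrative only, not the exact file minikube writes:

package main

import "fmt"

func main() {
	// Illustrative only: standard bridge/host-local/portmap layout.
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "k8s",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`
	fmt.Println(conflist)
}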
	I0414 17:50:40.762319  213406 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 17:50:40.762424  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:40.762443  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-418468 minikube.k8s.io/updated_at=2025_04_14T17_50_40_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f1e69a1cd498979c80dbe968253c827f6eb2cf37 minikube.k8s.io/name=embed-certs-418468 minikube.k8s.io/primary=true
	I0414 17:50:40.994576  213406 ops.go:34] apiserver oom_adj: -16
	I0414 17:50:40.994598  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:41.495583  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:41.995608  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:42.494670  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:42.995490  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:43.494862  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:43.995730  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:44.495428  213406 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 17:50:44.592036  213406 kubeadm.go:1113] duration metric: took 3.829658673s to wait for elevateKubeSystemPrivileges
	I0414 17:50:44.592070  213406 kubeadm.go:394] duration metric: took 5m0.228669417s to StartCluster
	I0414 17:50:44.592092  213406 settings.go:142] acquiring lock: {Name:mk0f1596f566b3225bf96154f374fff0641b21e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:50:44.592185  213406 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:50:44.593289  213406 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 17:50:44.593514  213406 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.199 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 17:50:44.593648  213406 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 17:50:44.593726  213406 config.go:182] Loaded profile config "embed-certs-418468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:50:44.593753  213406 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-418468"
	I0414 17:50:44.593775  213406 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-418468"
	W0414 17:50:44.593788  213406 addons.go:247] addon storage-provisioner should already be in state true
	I0414 17:50:44.593788  213406 addons.go:69] Setting dashboard=true in profile "embed-certs-418468"
	I0414 17:50:44.593793  213406 addons.go:69] Setting metrics-server=true in profile "embed-certs-418468"
	I0414 17:50:44.593809  213406 addons.go:238] Setting addon dashboard=true in "embed-certs-418468"
	I0414 17:50:44.593818  213406 addons.go:238] Setting addon metrics-server=true in "embed-certs-418468"
	W0414 17:50:44.593840  213406 addons.go:247] addon metrics-server should already be in state true
	I0414 17:50:44.593774  213406 addons.go:69] Setting default-storageclass=true in profile "embed-certs-418468"
	I0414 17:50:44.593872  213406 host.go:66] Checking if "embed-certs-418468" exists ...
	I0414 17:50:44.593881  213406 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-418468"
	W0414 17:50:44.593819  213406 addons.go:247] addon dashboard should already be in state true
	I0414 17:50:44.593841  213406 host.go:66] Checking if "embed-certs-418468" exists ...
	I0414 17:50:44.593949  213406 host.go:66] Checking if "embed-certs-418468" exists ...
	I0414 17:50:44.594259  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.594294  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.594307  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.594325  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.594382  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.594404  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.594442  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.594407  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.595088  213406 out.go:177] * Verifying Kubernetes components...
	I0414 17:50:44.596521  213406 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 17:50:44.609533  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43747
	I0414 17:50:44.609575  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37055
	I0414 17:50:44.609610  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39057
	I0414 17:50:44.610072  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.610124  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.610136  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.610594  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.610614  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.610724  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.610728  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.610746  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.610783  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.610997  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.611126  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.611245  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.611287  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetState
	I0414 17:50:44.611566  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.611607  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.611855  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.611890  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.612974  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44885
	I0414 17:50:44.613483  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.614431  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.614549  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.614940  213406 addons.go:238] Setting addon default-storageclass=true in "embed-certs-418468"
	W0414 17:50:44.614962  213406 addons.go:247] addon default-storageclass should already be in state true
	I0414 17:50:44.614990  213406 host.go:66] Checking if "embed-certs-418468" exists ...
	I0414 17:50:44.614950  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.615345  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.615388  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.615539  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.615584  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.626843  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33791
	I0414 17:50:44.627427  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.627885  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.627905  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.628338  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.628542  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetState
	I0414 17:50:44.629083  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0414 17:50:44.629405  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.629932  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.629948  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.630188  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0414 17:50:44.630331  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.630425  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:50:44.630488  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.630767  213406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:50:44.630792  213406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:50:44.630993  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.631008  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.631289  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.631482  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetState
	I0414 17:50:44.632157  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44551
	I0414 17:50:44.632324  213406 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 17:50:44.632525  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.633136  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.633159  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.633372  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:50:44.633566  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.633657  213406 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:50:44.633675  213406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 17:50:44.633693  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:50:44.633762  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetState
	I0414 17:50:44.634840  213406 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0414 17:50:44.635923  213406 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0414 17:50:44.636145  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:50:44.636955  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0414 17:50:44.636970  213406 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0414 17:50:44.636984  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:50:44.637272  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.637551  213406 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0414 17:50:44.637668  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:50:44.637698  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.637892  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:50:44.638053  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:50:44.638220  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:50:44.638412  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:50:44.638614  213406 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0414 17:50:44.638627  213406 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0414 17:50:44.638642  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:50:44.640489  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.640921  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:50:44.640999  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.641118  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:50:44.641252  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:50:44.641353  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:50:44.641461  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:50:44.641481  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.641837  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:50:44.641860  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.642029  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:50:44.642195  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:50:44.642338  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:50:44.642468  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:50:44.649470  213406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I0414 17:50:44.649885  213406 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:50:44.650319  213406 main.go:141] libmachine: Using API Version  1
	I0414 17:50:44.650332  213406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:50:44.650688  213406 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:50:44.650862  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetState
	I0414 17:50:44.652217  213406 main.go:141] libmachine: (embed-certs-418468) Calling .DriverName
	I0414 17:50:44.652408  213406 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 17:50:44.652422  213406 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 17:50:44.652437  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHHostname
	I0414 17:50:44.654995  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.655423  213406 main.go:141] libmachine: (embed-certs-418468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:33:03", ip: ""} in network mk-embed-certs-418468: {Iface:virbr2 ExpiryTime:2025-04-14 18:45:30 +0000 UTC Type:0 Mac:52:54:00:2f:33:03 Iaid: IPaddr:192.168.50.199 Prefix:24 Hostname:embed-certs-418468 Clientid:01:52:54:00:2f:33:03}
	I0414 17:50:44.655451  213406 main.go:141] libmachine: (embed-certs-418468) DBG | domain embed-certs-418468 has defined IP address 192.168.50.199 and MAC address 52:54:00:2f:33:03 in network mk-embed-certs-418468
	I0414 17:50:44.655552  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHPort
	I0414 17:50:44.655680  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHKeyPath
	I0414 17:50:44.655776  213406 main.go:141] libmachine: (embed-certs-418468) Calling .GetSSHUsername
	I0414 17:50:44.655847  213406 sshutil.go:53] new ssh client: &{IP:192.168.50.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/embed-certs-418468/id_rsa Username:docker}
	I0414 17:50:44.771042  213406 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 17:50:44.790138  213406 node_ready.go:35] waiting up to 6m0s for node "embed-certs-418468" to be "Ready" ...
	I0414 17:50:44.813392  213406 node_ready.go:49] node "embed-certs-418468" has status "Ready":"True"
	I0414 17:50:44.813417  213406 node_ready.go:38] duration metric: took 23.248396ms for node "embed-certs-418468" to be "Ready" ...
	I0414 17:50:44.813429  213406 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 17:50:44.816247  213406 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
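The node_ready and pod_ready checks above read the Ready condition from the node (and then pod) objects via the API server. A minimal client-go sketch of the node check, with the kubeconfig path and node name taken from the log; this is an illustrative standalone program, not minikube's own code, and needs k8s.io/client-go in go.mod:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "embed-certs-418468", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// A node is "Ready" when the NodeReady condition has status True.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %s has status \"Ready\":%q\n", node.Name, c.Status)
		}
	}
}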
	I0414 17:50:44.901629  213406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 17:50:44.909788  213406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 17:50:44.915477  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0414 17:50:44.915498  213406 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0414 17:50:44.941111  213406 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0414 17:50:44.941132  213406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0414 17:50:44.962200  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0414 17:50:44.962221  213406 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0414 17:50:45.009756  213406 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0414 17:50:45.009781  213406 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0414 17:50:45.045994  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0414 17:50:45.046027  213406 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0414 17:50:45.110797  213406 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 17:50:45.110830  213406 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0414 17:50:45.174495  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0414 17:50:45.174532  213406 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0414 17:50:45.225055  213406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 17:50:45.260868  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0414 17:50:45.260897  213406 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0414 17:50:45.286443  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:45.286475  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:45.286795  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:45.286859  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:45.286873  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:45.286882  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:45.286824  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Closing plugin on server side
	I0414 17:50:45.287121  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:45.287165  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:45.319685  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:45.319702  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:45.320094  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:45.320125  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:45.320125  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Closing plugin on server side
	I0414 17:50:45.348341  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0414 17:50:45.348362  213406 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0414 17:50:45.425795  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0414 17:50:45.425820  213406 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0414 17:50:45.460510  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0414 17:50:45.460534  213406 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0414 17:50:45.539385  213406 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0414 17:50:45.539413  213406 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0414 17:50:45.581338  213406 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
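The two-phase pattern above (stage every manifest under /etc/kubernetes/addons/, then apply them all in one kubectl invocation) makes the addon's objects land together in a single apply. A minimal sketch of the apply half, reusing the binary and kubeconfig paths from the log; "applyManifests" is a hypothetical helper name, not minikube's actual implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // applyManifests runs one "kubectl apply" over every staged manifest,
    // mirroring the logged command line.
    func applyManifests(kubectl, kubeconfig string, manifests []string) error {
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command(kubectl, args...)
    	cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubectl apply: %v\n%s", err, out)
    	}
    	return nil
    }

    func main() {
    	// List trimmed to two files; the logged invocation applies ten.
    	err := applyManifests(
    		"/var/lib/minikube/binaries/v1.32.2/kubectl",
    		"/var/lib/minikube/kubeconfig",
    		[]string{
    			"/etc/kubernetes/addons/dashboard-ns.yaml",
    			"/etc/kubernetes/addons/dashboard-svc.yaml",
    		},
    	)
    	if err != nil {
    		fmt.Println(err)
    	}
    }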
	I0414 17:50:45.899255  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:45.899281  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:45.899682  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:45.899757  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:45.899701  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Closing plugin on server side
	I0414 17:50:45.899772  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:45.899847  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:45.900112  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:45.900124  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:46.625721  213406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.400621394s)
	I0414 17:50:46.625789  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:46.625805  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:46.626108  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:46.626152  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:46.626167  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:46.626175  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:46.626444  213406 main.go:141] libmachine: (embed-certs-418468) DBG | Closing plugin on server side
	I0414 17:50:46.626480  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:46.626495  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:46.626506  213406 addons.go:479] Verifying addon metrics-server=true in "embed-certs-418468"
	I0414 17:50:46.825449  213406 pod_ready.go:103] pod "etcd-embed-certs-418468" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:47.825152  213406 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.24373778s)
	I0414 17:50:47.825202  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:47.825214  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:47.825570  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:47.825589  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:47.825599  213406 main.go:141] libmachine: Making call to close driver server
	I0414 17:50:47.825606  213406 main.go:141] libmachine: (embed-certs-418468) Calling .Close
	I0414 17:50:47.825874  213406 main.go:141] libmachine: Successfully made call to close driver server
	I0414 17:50:47.825893  213406 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 17:50:47.827533  213406 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-418468 addons enable metrics-server
	
	I0414 17:50:47.828991  213406 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0414 17:50:47.830391  213406 addons.go:514] duration metric: took 3.236761674s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0414 17:50:49.325501  213406 pod_ready.go:103] pod "etcd-embed-certs-418468" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:51.822230  213406 pod_ready.go:103] pod "etcd-embed-certs-418468" in "kube-system" namespace has status "Ready":"False"
	I0414 17:50:53.821538  213406 pod_ready.go:93] pod "etcd-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:50:53.821565  213406 pod_ready.go:82] duration metric: took 9.005299134s for pod "etcd-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.821578  213406 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.825285  213406 pod_ready.go:93] pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:50:53.825300  213406 pod_ready.go:82] duration metric: took 3.715551ms for pod "kube-apiserver-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.825308  213406 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.829517  213406 pod_ready.go:93] pod "kube-controller-manager-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:50:53.829531  213406 pod_ready.go:82] duration metric: took 4.218381ms for pod "kube-controller-manager-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.829538  213406 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.835753  213406 pod_ready.go:93] pod "kube-scheduler-embed-certs-418468" in "kube-system" namespace has status "Ready":"True"
	I0414 17:50:53.835766  213406 pod_ready.go:82] duration metric: took 6.223543ms for pod "kube-scheduler-embed-certs-418468" in "kube-system" namespace to be "Ready" ...
	I0414 17:50:53.835772  213406 pod_ready.go:39] duration metric: took 9.022329744s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
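The pod_ready.go waits above poll each pod's Ready condition with a per-pod budget (6m0s in this run). A minimal client-go sketch of such a loop, not minikube's actual implementation; the 2s poll interval is an assumption:

    package podwait

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the pod's Ready condition is True, mirroring
    // the 6m0s budget visible in the log above.
    func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, cond := range pod.Status.Conditions {
    				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second) // poll interval is an assumption
    	}
    	return fmt.Errorf("pod %s/%s not Ready within 6m0s", ns, name)
    }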
	I0414 17:50:53.835786  213406 api_server.go:52] waiting for apiserver process to appear ...
	I0414 17:50:53.835832  213406 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:50:53.867607  213406 api_server.go:72] duration metric: took 9.274050694s to wait for apiserver process to appear ...
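The process check above relies on pgrep, where -x matches the pattern exactly, -n selects the newest matching process and -f matches against the full command line, so a zero exit code means a kube-apiserver launched by minikube is running. As a one-function sketch ("apiserverProcessUp" is a hypothetical name):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // apiserverProcessUp mirrors the logged pgrep invocation: exit code 0
    // means a matching process exists, anything else means none was found.
    func apiserverProcessUp() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	fmt.Println("apiserver process up:", apiserverProcessUp())
    }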
	I0414 17:50:53.867636  213406 api_server.go:88] waiting for apiserver healthz status ...
	I0414 17:50:53.867656  213406 api_server.go:253] Checking apiserver healthz at https://192.168.50.199:8443/healthz ...
	I0414 17:50:53.871486  213406 api_server.go:279] https://192.168.50.199:8443/healthz returned 200:
	ok
	I0414 17:50:53.872317  213406 api_server.go:141] control plane version: v1.32.2
	I0414 17:50:53.872338  213406 api_server.go:131] duration metric: took 4.691901ms to wait for apiserver health ...
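The healthz check above is a plain HTTPS GET that expects status 200 with the literal body "ok". A minimal sketch against the endpoint from the log; skipping certificate verification here is a simplification for the sketch, a real client would trust the cluster CA instead:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // apiserverHealthy returns true when GET <base>/healthz answers 200 "ok".
    func apiserverHealthy(base string) bool {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
    		},
    	}
    	resp, err := client.Get(base + "/healthz")
    	if err != nil {
    		return false
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	return resp.StatusCode == http.StatusOK && string(body) == "ok"
    }

    func main() {
    	fmt.Println(apiserverHealthy("https://192.168.50.199:8443"))
    }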
	I0414 17:50:53.872344  213406 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 17:50:53.878405  213406 system_pods.go:59] 9 kube-system pods found
	I0414 17:50:53.878425  213406 system_pods.go:61] "coredns-668d6bf9bc-4vrqt" [0738482b-8c0d-4c89-a82f-3dd26a143603] Running
	I0414 17:50:53.878430  213406 system_pods.go:61] "coredns-668d6bf9bc-kbbbq" [24fdcd3d-22b7-4976-85f2-42754178ac49] Running
	I0414 17:50:53.878434  213406 system_pods.go:61] "etcd-embed-certs-418468" [97963194-6254-4aaf-b879-3c4000c86351] Running
	I0414 17:50:53.878437  213406 system_pods.go:61] "kube-apiserver-embed-certs-418468" [8cdb0b46-19da-4d8e-9bd0-7efaa4ef75e6] Running
	I0414 17:50:53.878441  213406 system_pods.go:61] "kube-controller-manager-embed-certs-418468" [7d26ed2b-d015-4015-b248-ccce9e76a6bb] Running
	I0414 17:50:53.878444  213406 system_pods.go:61] "kube-proxy-zqrnn" [b0b54433-bd5d-4c9b-a547-8558e3d66058] Running
	I0414 17:50:53.878447  213406 system_pods.go:61] "kube-scheduler-embed-certs-418468" [5bd1256a-1d95-4e7d-b52e-0208820937f8] Running
	I0414 17:50:53.878454  213406 system_pods.go:61] "metrics-server-f79f97bbb-8blvp" [39557b8d-be28-48b9-ab37-76c22f46341d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:50:53.878461  213406 system_pods.go:61] "storage-provisioner" [136247c3-315f-43ad-a40d-080ad60a6b45] Running
	I0414 17:50:53.878469  213406 system_pods.go:74] duration metric: took 6.120329ms to wait for pod list to return data ...
	I0414 17:50:53.878478  213406 default_sa.go:34] waiting for default service account to be created ...
	I0414 17:50:53.880531  213406 default_sa.go:45] found service account: "default"
	I0414 17:50:53.880549  213406 default_sa.go:55] duration metric: took 2.064832ms for default service account to be created ...
	I0414 17:50:53.880558  213406 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 17:50:54.020249  213406 system_pods.go:86] 9 kube-system pods found
	I0414 17:50:54.020276  213406 system_pods.go:89] "coredns-668d6bf9bc-4vrqt" [0738482b-8c0d-4c89-a82f-3dd26a143603] Running
	I0414 17:50:54.020282  213406 system_pods.go:89] "coredns-668d6bf9bc-kbbbq" [24fdcd3d-22b7-4976-85f2-42754178ac49] Running
	I0414 17:50:54.020286  213406 system_pods.go:89] "etcd-embed-certs-418468" [97963194-6254-4aaf-b879-3c4000c86351] Running
	I0414 17:50:54.020290  213406 system_pods.go:89] "kube-apiserver-embed-certs-418468" [8cdb0b46-19da-4d8e-9bd0-7efaa4ef75e6] Running
	I0414 17:50:54.020295  213406 system_pods.go:89] "kube-controller-manager-embed-certs-418468" [7d26ed2b-d015-4015-b248-ccce9e76a6bb] Running
	I0414 17:50:54.020298  213406 system_pods.go:89] "kube-proxy-zqrnn" [b0b54433-bd5d-4c9b-a547-8558e3d66058] Running
	I0414 17:50:54.020301  213406 system_pods.go:89] "kube-scheduler-embed-certs-418468" [5bd1256a-1d95-4e7d-b52e-0208820937f8] Running
	I0414 17:50:54.020307  213406 system_pods.go:89] "metrics-server-f79f97bbb-8blvp" [39557b8d-be28-48b9-ab37-76c22f46341d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 17:50:54.020312  213406 system_pods.go:89] "storage-provisioner" [136247c3-315f-43ad-a40d-080ad60a6b45] Running
	I0414 17:50:54.020323  213406 system_pods.go:126] duration metric: took 139.758195ms to wait for k8s-apps to be running ...
	I0414 17:50:54.020333  213406 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 17:50:54.020383  213406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:50:54.042446  213406 system_svc.go:56] duration metric: took 22.104112ms WaitForService to wait for kubelet
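The kubelet check above uses "systemctl is-active --quiet", which prints nothing and reports state purely through its exit code; is-active exits 0 when at least one named unit is active, so the stray "service" token in the logged argv is harmless. A one-function sketch that mirrors the logged command:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // unitActive mirrors the logged argv, including the extra "service"
    // token; exit code 0 means the unit is active, and --quiet suppresses
    // all output, which is why this step leaves nothing in the log.
    func unitActive(unit string) bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", unit).Run() == nil
    }

    func main() {
    	fmt.Println("kubelet active:", unitActive("kubelet"))
    }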
	I0414 17:50:54.042479  213406 kubeadm.go:582] duration metric: took 9.448925946s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 17:50:54.042499  213406 node_conditions.go:102] verifying NodePressure condition ...
	I0414 17:50:54.219590  213406 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 17:50:54.219612  213406 node_conditions.go:123] node cpu capacity is 2
	I0414 17:50:54.219623  213406 node_conditions.go:105] duration metric: took 177.119005ms to run NodePressure ...
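The NodePressure step above reads capacity straight off Node.Status.Capacity; the two figures in the log (17734596Ki of ephemeral storage, 2 CPUs) are the "ephemeral-storage" and "cpu" entries of that map. A client-go sketch of the same read, not minikube's actual code:

    package nodecheck

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity reports the same fields the log lines above show,
    // taken directly from each node's capacity map.
    func printNodeCapacity(ctx context.Context, c kubernetes.Interface) error {
    	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
    		fmt.Printf("node cpu capacity is %s\n", cpu.String())
    	}
    	return nil
    }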
	I0414 17:50:54.219634  213406 start.go:241] waiting for startup goroutines ...
	I0414 17:50:54.219642  213406 start.go:246] waiting for cluster config update ...
	I0414 17:50:54.219655  213406 start.go:255] writing updated cluster config ...
	I0414 17:50:54.219959  213406 ssh_runner.go:195] Run: rm -f paused
	I0414 17:50:54.282458  213406 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 17:50:54.284727  213406 out.go:177] * Done! kubectl is now configured to use "embed-certs-418468" cluster and "default" namespace by default
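The "(minor skew: 0)" note above compares only the minor version components of kubectl (1.32.3) and the cluster (1.32.2), since kubectl supports one minor version of skew against the API server. A small sketch of that comparison; it assumes plain MAJOR.MINOR.PATCH strings and is not minikube's actual parsing code:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference of the minor components;
    // the major and patch components are ignored, as in the log line above.
    func minorSkew(kubectlVer, clusterVer string) int {
    	minor := func(v string) int {
    		m, _ := strconv.Atoi(strings.Split(v, ".")[1])
    		return m
    	}
    	d := minor(kubectlVer) - minor(clusterVer)
    	if d < 0 {
    		d = -d
    	}
    	return d
    }

    func main() {
    	fmt.Println(minorSkew("1.32.3", "1.32.2")) // 0
    }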
	I0414 17:50:56.269443  213635 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 17:50:56.270353  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:50:56.270523  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:51:01.271007  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:51:01.271253  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:51:11.271837  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:51:11.272049  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:51:31.273087  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:51:31.273315  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:52:11.275552  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:52:11.275856  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
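Reading the timestamps above, the kubelet-check probes after the initial 40s timeout land 5s, 10s, 20s and 40s apart, a doubling backoff against the kubelet's local health port. A sketch of that probe loop; the interval schedule is inferred from the log, not taken from kubeadm's source:

    package main

    import (
    	"errors"
    	"fmt"
    	"net/http"
    	"time"
    )

    // kubeletHealthy issues the same GET the kubelet-check lines describe.
    func kubeletHealthy() bool {
    	resp, err := http.Get("http://localhost:10248/healthz")
    	if err != nil {
    		return false
    	}
    	defer resp.Body.Close()
    	return resp.StatusCode == http.StatusOK
    }

    // waitKubeletHealthy retries with the doubling intervals read off the
    // log timestamps, then gives up with the same wording as kubeadm.
    func waitKubeletHealthy() error {
    	for _, d := range []time.Duration{5 * time.Second, 10 * time.Second, 20 * time.Second, 40 * time.Second} {
    		if kubeletHealthy() {
    			return nil
    		}
    		time.Sleep(d)
    	}
    	if kubeletHealthy() {
    		return nil
    	}
    	return errors.New("timed out waiting for the condition")
    }

    func main() {
    	fmt.Println(waitKubeletHealthy())
    }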
	I0414 17:52:11.275878  213635 kubeadm.go:310] 
	I0414 17:52:11.275927  213635 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 17:52:11.275981  213635 kubeadm.go:310] 		timed out waiting for the condition
	I0414 17:52:11.275991  213635 kubeadm.go:310] 
	I0414 17:52:11.276038  213635 kubeadm.go:310] 	This error is likely caused by:
	I0414 17:52:11.276092  213635 kubeadm.go:310] 		- The kubelet is not running
	I0414 17:52:11.276213  213635 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 17:52:11.276222  213635 kubeadm.go:310] 
	I0414 17:52:11.276375  213635 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 17:52:11.276431  213635 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 17:52:11.276482  213635 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 17:52:11.276502  213635 kubeadm.go:310] 
	I0414 17:52:11.276617  213635 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 17:52:11.276722  213635 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 17:52:11.276733  213635 kubeadm.go:310] 
	I0414 17:52:11.276827  213635 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 17:52:11.276902  213635 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 17:52:11.276994  213635 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 17:52:11.277119  213635 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 17:52:11.277137  213635 kubeadm.go:310] 
	I0414 17:52:11.277720  213635 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:52:11.277871  213635 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 17:52:11.277974  213635 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0414 17:52:11.278218  213635 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0414 17:52:11.278258  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 17:52:11.738009  213635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:52:11.752929  213635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 17:52:11.762849  213635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 17:52:11.762865  213635 kubeadm.go:157] found existing configuration files:
	
	I0414 17:52:11.762901  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 17:52:11.772188  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 17:52:11.772240  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 17:52:11.781466  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 17:52:11.790582  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 17:52:11.790624  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 17:52:11.799766  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 17:52:11.808443  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 17:52:11.808481  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 17:52:11.817544  213635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 17:52:11.826418  213635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 17:52:11.826464  213635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
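The grep/rm sequence above is a stale-config sweep: each kubeconfig under /etc/kubernetes survives only if it already references the expected control-plane endpoint, and since a missing file and a non-matching file both make grep exit non-zero, both cases fall through to the removal. A compact sketch of the same loop:

    package main

    import "os/exec"

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + name
    		// grep exits non-zero on "no match" and on "no such file",
    		// so either condition triggers the cleanup, as in the log.
    		if exec.Command("sudo", "grep", endpoint, path).Run() != nil {
    			exec.Command("sudo", "rm", "-f", path).Run()
    		}
    	}
    }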
	I0414 17:52:11.835946  213635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 17:52:11.910031  213635 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 17:52:11.910113  213635 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 17:52:12.048882  213635 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 17:52:12.049032  213635 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 17:52:12.049160  213635 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 17:52:12.216124  213635 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 17:52:12.218841  213635 out.go:235]   - Generating certificates and keys ...
	I0414 17:52:12.218938  213635 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 17:52:12.219030  213635 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 17:52:12.219153  213635 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 17:52:12.219244  213635 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 17:52:12.219342  213635 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 17:52:12.219420  213635 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 17:52:12.219507  213635 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 17:52:12.219612  213635 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 17:52:12.219690  213635 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 17:52:12.219802  213635 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 17:52:12.219867  213635 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 17:52:12.219917  213635 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 17:52:12.485118  213635 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 17:52:12.699901  213635 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 17:52:12.798407  213635 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 17:52:12.941803  213635 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 17:52:12.964937  213635 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 17:52:12.965897  213635 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 17:52:12.966059  213635 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 17:52:13.109607  213635 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 17:52:13.112109  213635 out.go:235]   - Booting up control plane ...
	I0414 17:52:13.112248  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 17:52:13.115664  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 17:52:13.117940  213635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 17:52:13.119128  213635 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 17:52:13.123525  213635 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 17:52:53.126895  213635 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 17:52:53.127019  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:52:53.127237  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:52:58.127800  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:52:58.127997  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:53:08.128675  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:53:08.128878  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:53:28.129416  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:53:28.129642  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:54:08.127998  213635 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 17:54:08.128303  213635 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 17:54:08.128326  213635 kubeadm.go:310] 
	I0414 17:54:08.128362  213635 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 17:54:08.128505  213635 kubeadm.go:310] 		timed out waiting for the condition
	I0414 17:54:08.128527  213635 kubeadm.go:310] 
	I0414 17:54:08.128595  213635 kubeadm.go:310] 	This error is likely caused by:
	I0414 17:54:08.128640  213635 kubeadm.go:310] 		- The kubelet is not running
	I0414 17:54:08.128791  213635 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 17:54:08.128814  213635 kubeadm.go:310] 
	I0414 17:54:08.128946  213635 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 17:54:08.128997  213635 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 17:54:08.129043  213635 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 17:54:08.129052  213635 kubeadm.go:310] 
	I0414 17:54:08.129167  213635 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 17:54:08.129296  213635 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 17:54:08.129314  213635 kubeadm.go:310] 
	I0414 17:54:08.129479  213635 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 17:54:08.129615  213635 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 17:54:08.129706  213635 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 17:54:08.129814  213635 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 17:54:08.129824  213635 kubeadm.go:310] 
	I0414 17:54:08.130345  213635 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 17:54:08.130443  213635 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 17:54:08.130555  213635 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 17:54:08.130646  213635 kubeadm.go:394] duration metric: took 8m1.792756267s to StartCluster
	I0414 17:54:08.130721  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 17:54:08.130802  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 17:54:08.175207  213635 cri.go:89] found id: ""
	I0414 17:54:08.175243  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.175251  213635 logs.go:284] No container was found matching "kube-apiserver"
	I0414 17:54:08.175257  213635 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 17:54:08.175311  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 17:54:08.209345  213635 cri.go:89] found id: ""
	I0414 17:54:08.209370  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.209377  213635 logs.go:284] No container was found matching "etcd"
	I0414 17:54:08.209382  213635 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 17:54:08.209428  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 17:54:08.244901  213635 cri.go:89] found id: ""
	I0414 17:54:08.244937  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.244946  213635 logs.go:284] No container was found matching "coredns"
	I0414 17:54:08.244952  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 17:54:08.245022  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 17:54:08.279974  213635 cri.go:89] found id: ""
	I0414 17:54:08.279999  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.280006  213635 logs.go:284] No container was found matching "kube-scheduler"
	I0414 17:54:08.280011  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 17:54:08.280065  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 17:54:08.312666  213635 cri.go:89] found id: ""
	I0414 17:54:08.312691  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.312701  213635 logs.go:284] No container was found matching "kube-proxy"
	I0414 17:54:08.312708  213635 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 17:54:08.312761  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 17:54:08.345579  213635 cri.go:89] found id: ""
	I0414 17:54:08.345609  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.345619  213635 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 17:54:08.345627  213635 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 17:54:08.345682  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 17:54:08.377810  213635 cri.go:89] found id: ""
	I0414 17:54:08.377844  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.377853  213635 logs.go:284] No container was found matching "kindnet"
	I0414 17:54:08.377858  213635 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 17:54:08.377900  213635 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 17:54:08.409648  213635 cri.go:89] found id: ""
	I0414 17:54:08.409673  213635 logs.go:282] 0 containers: []
	W0414 17:54:08.409681  213635 logs.go:284] No container was found matching "kubernetes-dashboard"
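The post-mortem above sweeps a fixed list of component names and asks crictl for matching container IDs; every list coming back empty is what produces the "No container was found" warnings and confirms the control plane never started. A compact sketch of the sweep:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// --quiet prints one container ID per line; empty output means
    		// no container for that component exists in any state.
    		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if len(bytes.TrimSpace(out)) == 0 {
    			fmt.Printf("No container was found matching %q\n", name)
    		}
    	}
    }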
	I0414 17:54:08.409697  213635 logs.go:123] Gathering logs for dmesg ...
	I0414 17:54:08.409708  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 17:54:08.422905  213635 logs.go:123] Gathering logs for describe nodes ...
	I0414 17:54:08.422930  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 17:54:08.495193  213635 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
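The failed "describe nodes" entry above records stdout (empty) and stderr (the connection-refused message) as separate streams. Wiring each to its own buffer reproduces that layout; a minimal sketch using the paths from the log:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.20.0/kubectl",
    		"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
    	// Separate buffers keep stdout and stderr distinct, matching the
    	// stdout:/stderr: sections in the log entry above.
    	var stdout, stderr bytes.Buffer
    	cmd.Stdout, cmd.Stderr = &stdout, &stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Printf("stdout:\n%s\nstderr:\n%s\n", stdout.String(), stderr.String())
    	}
    }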
	I0414 17:54:08.495217  213635 logs.go:123] Gathering logs for CRI-O ...
	I0414 17:54:08.495232  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 17:54:08.603072  213635 logs.go:123] Gathering logs for container status ...
	I0414 17:54:08.603108  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 17:54:08.640028  213635 logs.go:123] Gathering logs for kubelet ...
	I0414 17:54:08.640058  213635 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0414 17:54:08.690480  213635 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 17:54:08.690537  213635 out.go:270] * 
	W0414 17:54:08.690590  213635 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 17:54:08.690605  213635 out.go:270] * 
	W0414 17:54:08.691392  213635 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 17:54:08.694565  213635 out.go:201] 
	W0414 17:54:08.695675  213635 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 17:54:08.695709  213635 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 17:54:08.695724  213635 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 17:54:08.697684  213635 out.go:201] 
	
	
	==> CRI-O <==
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.866008149Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744654154865989090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11c864b1-79bd-43f5-9f3d-cae6f809d316 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.866493516Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3749017-d477-4fa0-9f31-3c5597b72d27 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.866563781Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3749017-d477-4fa0-9f31-3c5597b72d27 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.866603209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e3749017-d477-4fa0-9f31-3c5597b72d27 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.897175429Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=90fd380f-a5fa-45f7-a7ea-5bad6a9a0400 name=/runtime.v1.RuntimeService/Version
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.897277710Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=90fd380f-a5fa-45f7-a7ea-5bad6a9a0400 name=/runtime.v1.RuntimeService/Version
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.898244612Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c84be69f-1a99-4e6c-9a3e-ebd912e34e17 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.898618965Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744654154898595510,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c84be69f-1a99-4e6c-9a3e-ebd912e34e17 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.899267194Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d5378a0-8aaa-404a-8025-d2a4913db814 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.899335003Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d5378a0-8aaa-404a-8025-d2a4913db814 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.899368040Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5d5378a0-8aaa-404a-8025-d2a4913db814 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.931498933Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ff06174-9d34-4adc-a5b4-dc9ebfeeb5be name=/runtime.v1.RuntimeService/Version
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.931564191Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ff06174-9d34-4adc-a5b4-dc9ebfeeb5be name=/runtime.v1.RuntimeService/Version
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.932634878Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0343315c-588f-4f27-bb7e-2ee4925f6ab9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.933105523Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744654154933083133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0343315c-588f-4f27-bb7e-2ee4925f6ab9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.933833783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1192015d-dce9-4e9b-9c20-3dfac19c212b name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.933894947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1192015d-dce9-4e9b-9c20-3dfac19c212b name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.933926654Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1192015d-dce9-4e9b-9c20-3dfac19c212b name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.963070525Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=86251b1b-3d59-4e9c-9a94-8a14ff85a8dc name=/runtime.v1.RuntimeService/Version
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.963165224Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=86251b1b-3d59-4e9c-9a94-8a14ff85a8dc name=/runtime.v1.RuntimeService/Version
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.964494539Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a9109a4-6b3d-44ed-a7cb-9e425268e5d0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.964963456Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744654154964941473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a9109a4-6b3d-44ed-a7cb-9e425268e5d0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.965371485Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca1f0b19-b99c-4444-a51d-5bf2482cd2ab name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.965441561Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca1f0b19-b99c-4444-a51d-5bf2482cd2ab name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 18:09:14 old-k8s-version-768580 crio[629]: time="2025-04-14 18:09:14.965474056Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ca1f0b19-b99c-4444-a51d-5bf2482cd2ab name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr14 17:45] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055960] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049332] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.224319] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.838807] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.420171] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.914151] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.065125] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060469] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.182225] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.143184] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.256654] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[Apr14 17:46] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.073476] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.861304] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[ +14.344832] kauditd_printk_skb: 46 callbacks suppressed
	[Apr14 17:50] systemd-fstab-generator[5080]: Ignoring "noauto" option for root device
	[Apr14 17:52] systemd-fstab-generator[5361]: Ignoring "noauto" option for root device
	[  +0.059704] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:09:15 up 23 min,  0 users,  load average: 0.00, 0.02, 0.05
	Linux old-k8s-version-768580 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 14 18:09:11 old-k8s-version-768580 kubelet[7208]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Apr 14 18:09:11 old-k8s-version-768580 kubelet[7208]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000359140, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000bdfda0, 0x24, 0x0, ...)
	Apr 14 18:09:11 old-k8s-version-768580 kubelet[7208]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Apr 14 18:09:11 old-k8s-version-768580 kubelet[7208]: net.(*Dialer).DialContext(0xc0001f50e0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000bdfda0, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 14 18:09:11 old-k8s-version-768580 kubelet[7208]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Apr 14 18:09:11 old-k8s-version-768580 kubelet[7208]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0006f9320, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000bdfda0, 0x24, 0x1000000000060, 0x7f41dbf91ce8, 0x118, ...)
	Apr 14 18:09:11 old-k8s-version-768580 kubelet[7208]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 14 18:09:11 old-k8s-version-768580 kubelet[7208]: net/http.(*Transport).dial(0xc000a96000, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000bdfda0, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 14 18:09:11 old-k8s-version-768580 kubelet[7208]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 14 18:09:11 old-k8s-version-768580 kubelet[7208]: net/http.(*Transport).dialConn(0xc000a96000, 0x4f7fe00, 0xc000120018, 0x0, 0xc0003cc540, 0x5, 0xc000bdfda0, 0x24, 0x0, 0xc00096e000, ...)
	Apr 14 18:09:11 old-k8s-version-768580 kubelet[7208]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 14 18:09:11 old-k8s-version-768580 kubelet[7208]: net/http.(*Transport).dialConnFor(0xc000a96000, 0xc000bd54a0)
	Apr 14 18:09:11 old-k8s-version-768580 kubelet[7208]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 14 18:09:11 old-k8s-version-768580 kubelet[7208]: created by net/http.(*Transport).queueForDial
	Apr 14 18:09:11 old-k8s-version-768580 kubelet[7208]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 14 18:09:11 old-k8s-version-768580 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 14 18:09:11 old-k8s-version-768580 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 14 18:09:11 old-k8s-version-768580 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 177.
	Apr 14 18:09:11 old-k8s-version-768580 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 14 18:09:11 old-k8s-version-768580 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 14 18:09:11 old-k8s-version-768580 kubelet[7217]: I0414 18:09:11.795770    7217 server.go:416] Version: v1.20.0
	Apr 14 18:09:11 old-k8s-version-768580 kubelet[7217]: I0414 18:09:11.796091    7217 server.go:837] Client rotation is on, will bootstrap in background
	Apr 14 18:09:11 old-k8s-version-768580 kubelet[7217]: I0414 18:09:11.797975    7217 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 14 18:09:11 old-k8s-version-768580 kubelet[7217]: W0414 18:09:11.798718    7217 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 14 18:09:11 old-k8s-version-768580 kubelet[7217]: I0414 18:09:11.798988    7217 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-768580 -n old-k8s-version-768580
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-768580 -n old-k8s-version-768580: exit status 2 (222.531887ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-768580" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (362.47s)
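The K8S_KUBELET_NOT_RUNNING failure above names its own diagnostics. A minimal triage sequence, assembled only from the commands the log itself suggests (the kubeadm hints, the crictl example, and the --extra-config suggestion; the profile name is taken verbatim from the log), might look like:

	out/minikube-linux-amd64 -p old-k8s-version-768580 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-768580 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-768580 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	out/minikube-linux-amd64 start -p old-k8s-version-768580 --extra-config=kubelet.cgroup-driver=systemd

The kubelet log's "Cannot detect current cgroup on cgroup v2" warning above is consistent with the cgroup-driver suggestion, so the retry with --extra-config=kubelet.cgroup-driver=systemd is the first thing to try.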

                                                
                                    

Test pass (271/321)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 7.95
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.32.2/json-events 4.34
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.05
18 TestDownloadOnly/v1.32.2/DeleteAll 0.13
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 83.39
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 132.29
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 10.5
35 TestAddons/parallel/Registry 18.64
37 TestAddons/parallel/InspektorGadget 10.98
38 TestAddons/parallel/MetricsServer 6.31
40 TestAddons/parallel/CSI 50.8
41 TestAddons/parallel/Headlamp 21.15
42 TestAddons/parallel/CloudSpanner 5.56
43 TestAddons/parallel/LocalPath 11.1
44 TestAddons/parallel/NvidiaDevicePlugin 5.59
45 TestAddons/parallel/Yakd 11.8
47 TestAddons/StoppedEnableDisable 91
48 TestCertOptions 71.04
49 TestCertExpiration 313.97
51 TestForceSystemdFlag 106.17
52 TestForceSystemdEnv 45.26
54 TestKVMDriverInstallOrUpdate 1.33
58 TestErrorSpam/setup 42.22
59 TestErrorSpam/start 0.33
60 TestErrorSpam/status 0.71
61 TestErrorSpam/pause 1.53
62 TestErrorSpam/unpause 1.69
63 TestErrorSpam/stop 5.42
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 91.86
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 29.73
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.2
75 TestFunctional/serial/CacheCmd/cache/add_local 1.08
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.04
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
80 TestFunctional/serial/CacheCmd/cache/delete 0.09
81 TestFunctional/serial/MinikubeKubectlCmd 0.1
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 50.64
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.38
86 TestFunctional/serial/LogsFileCmd 1.39
87 TestFunctional/serial/InvalidService 4.37
89 TestFunctional/parallel/ConfigCmd 0.33
90 TestFunctional/parallel/DashboardCmd 28.14
91 TestFunctional/parallel/DryRun 0.28
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 1.05
97 TestFunctional/parallel/ServiceCmdConnect 10.53
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 41.76
101 TestFunctional/parallel/SSHCmd 0.44
102 TestFunctional/parallel/CpCmd 1.46
103 TestFunctional/parallel/MySQL 22.69
104 TestFunctional/parallel/FileSync 0.25
105 TestFunctional/parallel/CertSync 1.31
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
113 TestFunctional/parallel/License 0.18
114 TestFunctional/parallel/Version/short 0.05
115 TestFunctional/parallel/Version/components 0.83
116 TestFunctional/parallel/ImageCommands/ImageListShort 1.91
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.42
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.67
121 TestFunctional/parallel/ImageCommands/Setup 0.49
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.83
133 TestFunctional/parallel/ProfileCmd/profile_list 0.36
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
135 TestFunctional/parallel/ServiceCmd/DeployApp 9.2
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.47
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.01
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.87
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.78
142 TestFunctional/parallel/MountCmd/any-port 10.09
143 TestFunctional/parallel/ServiceCmd/List 0.32
144 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
145 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
146 TestFunctional/parallel/ServiceCmd/Format 0.37
147 TestFunctional/parallel/ServiceCmd/URL 0.31
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
151 TestFunctional/parallel/MountCmd/specific-port 2.13
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.4
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 189.93
161 TestMultiControlPlane/serial/DeployApp 6.82
162 TestMultiControlPlane/serial/PingHostFromPods 1.16
163 TestMultiControlPlane/serial/AddWorkerNode 54.88
164 TestMultiControlPlane/serial/NodeLabels 0.06
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
166 TestMultiControlPlane/serial/CopyFile 12.53
167 TestMultiControlPlane/serial/StopSecondaryNode 91.6
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.63
169 TestMultiControlPlane/serial/RestartSecondaryNode 50.79
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.85
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 437.74
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.15
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.62
174 TestMultiControlPlane/serial/StopCluster 272.87
175 TestMultiControlPlane/serial/RestartCluster 120.02
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.6
177 TestMultiControlPlane/serial/AddSecondaryNode 75.18
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
182 TestJSONOutput/start/Command 80.76
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.69
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.6
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 7.32
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.19
210 TestMainNoArgs 0.04
211 TestMinikubeProfile 84.24
214 TestMountStart/serial/StartWithMountFirst 27.5
215 TestMountStart/serial/VerifyMountFirst 0.37
216 TestMountStart/serial/StartWithMountSecond 26.96
217 TestMountStart/serial/VerifyMountSecond 0.35
218 TestMountStart/serial/DeleteFirst 0.65
219 TestMountStart/serial/VerifyMountPostDelete 0.37
220 TestMountStart/serial/Stop 1.26
221 TestMountStart/serial/RestartStopped 21.4
222 TestMountStart/serial/VerifyMountPostStop 0.37
225 TestMultiNode/serial/FreshStart2Nodes 112.82
226 TestMultiNode/serial/DeployApp2Nodes 5.12
227 TestMultiNode/serial/PingHostFrom2Pods 0.74
228 TestMultiNode/serial/AddNode 45.01
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.56
231 TestMultiNode/serial/CopyFile 6.98
232 TestMultiNode/serial/StopNode 2.31
233 TestMultiNode/serial/StartAfterStop 36.49
234 TestMultiNode/serial/RestartKeepsNodes 338.57
235 TestMultiNode/serial/DeleteNode 2.78
236 TestMultiNode/serial/StopMultiNode 182.02
237 TestMultiNode/serial/RestartMultiNode 114.62
238 TestMultiNode/serial/ValidateNameConflict 43.62
245 TestScheduledStopUnix 114.95
249 TestRunningBinaryUpgrade 225.12
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
255 TestNoKubernetes/serial/StartWithK8s 94.56
256 TestNoKubernetes/serial/StartWithStopK8s 72.62
257 TestNoKubernetes/serial/Start 59.49
262 TestPause/serial/Start 86.25
267 TestNetworkPlugins/group/false 3.21
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
269 TestNoKubernetes/serial/ProfileList 1.04
270 TestNoKubernetes/serial/Stop 1.45
274 TestNoKubernetes/serial/StartNoArgs 44.03
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
283 TestStoppedBinaryUpgrade/Setup 0.48
284 TestStoppedBinaryUpgrade/Upgrade 128.69
286 TestNetworkPlugins/group/auto/Start 56.15
287 TestNetworkPlugins/group/kindnet/Start 72.16
288 TestStoppedBinaryUpgrade/MinikubeLogs 0.87
289 TestNetworkPlugins/group/calico/Start 100.4
290 TestNetworkPlugins/group/auto/KubeletFlags 0.27
291 TestNetworkPlugins/group/auto/NetCatPod 11.75
292 TestNetworkPlugins/group/auto/DNS 0.16
293 TestNetworkPlugins/group/auto/Localhost 0.13
294 TestNetworkPlugins/group/auto/HairPin 0.13
295 TestNetworkPlugins/group/custom-flannel/Start 70.66
296 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
297 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
298 TestNetworkPlugins/group/kindnet/NetCatPod 13.3
299 TestNetworkPlugins/group/kindnet/DNS 0.16
300 TestNetworkPlugins/group/kindnet/Localhost 0.12
301 TestNetworkPlugins/group/kindnet/HairPin 0.13
302 TestNetworkPlugins/group/enable-default-cni/Start 81.83
303 TestNetworkPlugins/group/calico/ControllerPod 6.01
304 TestNetworkPlugins/group/calico/KubeletFlags 0.22
305 TestNetworkPlugins/group/calico/NetCatPod 10.27
306 TestNetworkPlugins/group/calico/DNS 0.14
307 TestNetworkPlugins/group/calico/Localhost 0.12
308 TestNetworkPlugins/group/calico/HairPin 0.11
309 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
310 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.27
311 TestNetworkPlugins/group/flannel/Start 71.14
312 TestNetworkPlugins/group/custom-flannel/DNS 0.16
313 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
314 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
315 TestNetworkPlugins/group/bridge/Start 81.08
316 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
317 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.34
318 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
319 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
320 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
321 TestNetworkPlugins/group/flannel/ControllerPod 6.01
324 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
325 TestNetworkPlugins/group/flannel/NetCatPod 11.49
326 TestNetworkPlugins/group/flannel/DNS 0.17
327 TestNetworkPlugins/group/flannel/Localhost 0.13
328 TestNetworkPlugins/group/flannel/HairPin 0.12
329 TestNetworkPlugins/group/bridge/KubeletFlags 0.41
330 TestNetworkPlugins/group/bridge/NetCatPod 13.31
332 TestStartStop/group/no-preload/serial/FirstStart 106.4
334 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 102.76
335 TestNetworkPlugins/group/bridge/DNS 10.17
336 TestNetworkPlugins/group/bridge/Localhost 0.11
337 TestNetworkPlugins/group/bridge/HairPin 0.12
339 TestStartStop/group/newest-cni/serial/FirstStart 58.6
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.99
342 TestStartStop/group/newest-cni/serial/Stop 11.34
343 TestStartStop/group/no-preload/serial/DeployApp 9.27
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
345 TestStartStop/group/newest-cni/serial/SecondStart 36.78
346 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.29
347 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
348 TestStartStop/group/no-preload/serial/Stop 91.06
349 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
350 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.39
351 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
354 TestStartStop/group/newest-cni/serial/Pause 3.06
356 TestStartStop/group/embed-certs/serial/FirstStart 80.53
357 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
358 TestStartStop/group/no-preload/serial/SecondStart 388.4
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
360 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 346.82
361 TestStartStop/group/embed-certs/serial/DeployApp 10.3
362 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.16
363 TestStartStop/group/embed-certs/serial/Stop 91.18
366 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
367 TestStartStop/group/embed-certs/serial/SecondStart 336.01
368 TestStartStop/group/old-k8s-version/serial/Stop 3.3
369 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
371 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.01
372 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
373 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
374 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.89
375 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.01
376 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
377 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
378 TestStartStop/group/no-preload/serial/Pause 2.62
379 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10
380 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
381 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
382 TestStartStop/group/embed-certs/serial/Pause 2.63
x
+
TestDownloadOnly/v1.20.0/json-events (7.95s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-383049 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-383049 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.949946147s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.95s)
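With -o=json, minikube emits one CloudEvent-style JSON object per line on stdout, which is the stream this test consumes. As an illustrative sketch only (the profile name "download-only-demo" is hypothetical), the event types can be skimmed with jq:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-demo \
	  --force --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2 | jq -r '.type'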

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0414 16:31:12.306754  156633 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0414 16:31:12.306886  156633 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
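The preload check is essentially a filesystem test: it passes as soon as the tarball fetched by the earlier json-events run is present in the cache. The equivalent manual check, using the exact path from the log above:

	ls -lh /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4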

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-383049
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-383049: exit status 85 (54.955679ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-383049 | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC |          |
	|         | -p download-only-383049        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 16:31:04
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 16:31:04.395900  156645 out.go:345] Setting OutFile to fd 1 ...
	I0414 16:31:04.395984  156645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 16:31:04.395992  156645 out.go:358] Setting ErrFile to fd 2...
	I0414 16:31:04.395996  156645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 16:31:04.396148  156645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	W0414 16:31:04.396247  156645 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20349-149500/.minikube/config/config.json: open /home/jenkins/minikube-integration/20349-149500/.minikube/config/config.json: no such file or directory
	I0414 16:31:04.396769  156645 out.go:352] Setting JSON to true
	I0414 16:31:04.397600  156645 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4362,"bootTime":1744643902,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 16:31:04.397688  156645 start.go:139] virtualization: kvm guest
	I0414 16:31:04.399635  156645 out.go:97] [download-only-383049] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0414 16:31:04.399751  156645 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball: no such file or directory
	I0414 16:31:04.399790  156645 notify.go:220] Checking for updates...
	I0414 16:31:04.400940  156645 out.go:169] MINIKUBE_LOCATION=20349
	I0414 16:31:04.402024  156645 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 16:31:04.403080  156645 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 16:31:04.404135  156645 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 16:31:04.405098  156645 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0414 16:31:04.406888  156645 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0414 16:31:04.407070  156645 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 16:31:04.440502  156645 out.go:97] Using the kvm2 driver based on user configuration
	I0414 16:31:04.440534  156645 start.go:297] selected driver: kvm2
	I0414 16:31:04.440545  156645 start.go:901] validating driver "kvm2" against <nil>
	I0414 16:31:04.440853  156645 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 16:31:04.440930  156645 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20349-149500/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 16:31:04.455387  156645 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 16:31:04.455437  156645 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 16:31:04.455937  156645 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0414 16:31:04.456065  156645 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0414 16:31:04.456094  156645 cni.go:84] Creating CNI manager for ""
	I0414 16:31:04.456141  156645 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 16:31:04.456150  156645 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 16:31:04.456194  156645 start.go:340] cluster config:
	{Name:download-only-383049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-383049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 16:31:04.456345  156645 iso.go:125] acquiring lock: {Name:mk56ab209abfa01de10f2f82564ecd03de00499a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 16:31:04.457742  156645 out.go:97] Downloading VM boot image ...
	I0414 16:31:04.457773  156645 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20349-149500/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 16:31:07.515386  156645 out.go:97] Starting "download-only-383049" primary control-plane node in "download-only-383049" cluster
	I0414 16:31:07.515416  156645 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 16:31:07.543951  156645 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 16:31:07.543983  156645 cache.go:56] Caching tarball of preloaded images
	I0414 16:31:07.544161  156645 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 16:31:07.545801  156645 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0414 16:31:07.545818  156645 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0414 16:31:07.567645  156645 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-383049 host does not exist
	  To start a cluster, run: "minikube start -p download-only-383049"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-383049
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/json-events (4.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-356094 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-356094 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.337273507s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (4.34s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0414 16:31:16.946994  156633 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0414 16:31:16.947039  156633 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-356094
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-356094: exit status 85 (53.406641ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-383049 | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC |                     |
	|         | -p download-only-383049        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC | 14 Apr 25 16:31 UTC |
	| delete  | -p download-only-383049        | download-only-383049 | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC | 14 Apr 25 16:31 UTC |
	| start   | -o=json --download-only        | download-only-356094 | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC |                     |
	|         | -p download-only-356094        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 16:31:12
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 16:31:12.647114  156850 out.go:345] Setting OutFile to fd 1 ...
	I0414 16:31:12.647364  156850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 16:31:12.647372  156850 out.go:358] Setting ErrFile to fd 2...
	I0414 16:31:12.647376  156850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 16:31:12.647585  156850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	I0414 16:31:12.648115  156850 out.go:352] Setting JSON to true
	I0414 16:31:12.648956  156850 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4371,"bootTime":1744643902,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 16:31:12.649043  156850 start.go:139] virtualization: kvm guest
	I0414 16:31:12.650692  156850 out.go:97] [download-only-356094] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 16:31:12.650826  156850 notify.go:220] Checking for updates...
	I0414 16:31:12.651960  156850 out.go:169] MINIKUBE_LOCATION=20349
	I0414 16:31:12.653177  156850 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 16:31:12.654281  156850 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 16:31:12.655320  156850 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 16:31:12.656401  156850 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-356094 host does not exist
	  To start a cluster, run: "minikube start -p download-only-356094"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.05s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-356094
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I0414 16:31:17.485885  156633 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-396839 --alsologtostderr --binary-mirror http://127.0.0.1:35155 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-396839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-396839
--- PASS: TestBinaryMirror (0.59s)
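--binary-mirror points minikube's kubectl/kubelet/kubeadm downloads at an alternate host, so the mirror is expected to reproduce the dl.k8s.io path layout visible in the log above (release/v1.32.2/bin/linux/amd64/kubectl plus the matching .sha256 checksum file). A hypothetical stand-in mirror for local experiments, assuming prefetched binaries, could be as small as:

	mkdir -p mirror/release/v1.32.2/bin/linux/amd64
	cp kubectl kubectl.sha256 mirror/release/v1.32.2/bin/linux/amd64/   # hypothetical prefetched files
	(cd mirror && python3 -m http.server 35155)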

                                                
                                    
x
+
TestOffline (83.39s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-883275 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-883275 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m21.952839079s)
helpers_test.go:175: Cleaning up "offline-crio-883275" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-883275
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-883275: (1.433539736s)
--- PASS: TestOffline (83.39s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-411768
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-411768: exit status 85 (50.875801ms)

                                                
                                                
-- stdout --
	* Profile "addons-411768" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-411768"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-411768
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-411768: exit status 85 (51.620725ms)

                                                
                                                
-- stdout --
	* Profile "addons-411768" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-411768"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (132.29s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-411768 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-411768 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m12.291217304s)
--- PASS: TestAddons/Setup (132.29s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-411768 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-411768 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/serial/GCPAuth/FakeCredentials (10.5s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-411768 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-411768 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d2799830-4b53-4013-8379-64bfa1b342a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d2799830-4b53-4013-8379-64bfa1b342a4] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004618571s
addons_test.go:633: (dbg) Run:  kubectl --context addons-411768 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-411768 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-411768 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.50s)
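
The FakeCredentials flow reduces to: start a pod, then confirm the gcp-auth webhook injected GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT into its environment. A rough Go sketch of that probe follows; the context and pod names match this run, but the loop is an illustration, not the suite's helper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors addons_test.go:633 and :683: exec into the busybox pod
	// and print the variables the gcp-auth webhook should have set.
	for _, env := range []string{"GOOGLE_APPLICATION_CREDENTIALS", "GOOGLE_CLOUD_PROJECT"} {
		out, err := exec.Command("kubectl", "--context", "addons-411768",
			"exec", "busybox", "--", "/bin/sh", "-c", "printenv "+env).Output()
		if err != nil {
			fmt.Println(env, "not injected:", err)
			continue
		}
		fmt.Printf("%s=%s\n", env, strings.TrimSpace(string(out)))
	}
}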

TestAddons/parallel/Registry (18.64s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 6.277622ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
I0414 16:33:49.754586  156633 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
helpers_test.go:344: "registry-6c88467877-5vmwg" [e9f17d14-6916-4171-aba7-15b3d6dab565] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003325758s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bpsmn" [998c1dc5-a7ac-4e6d-a29f-01c054cb33e9] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003095301s
addons_test.go:331: (dbg) Run:  kubectl --context addons-411768 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-411768 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-411768 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.570980986s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-411768 ip
2025/04/14 16:34:07 [DEBUG] GET http://192.168.39.237:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-411768 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.64s)
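
The decisive step in the Registry test is the in-cluster probe: registry.kube-system.svc.cluster.local only resolves inside the cluster, so a throwaway busybox pod performs the wget --spider request. A hedged Go sketch of the same probe (-i replaces the log's -it, since no TTY is attached when shelling out like this):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// As in addons_test.go:336: run a short-lived pod and HEAD-request
	// the registry Service through its cluster-local DNS name.
	cmd := exec.Command("kubectl", "--context", "addons-411768",
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-i", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("registry not reachable:", err)
	}
}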

TestAddons/parallel/InspektorGadget (10.98s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-m6f8r" [b6c24b3b-941c-417c-8a44-fcb9701b94df] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011705126s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-411768 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-411768 addons disable inspektor-gadget --alsologtostderr -v=1: (5.969609058s)
--- PASS: TestAddons/parallel/InspektorGadget (10.98s)

TestAddons/parallel/MetricsServer (6.31s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 6.294152ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
I0414 16:33:49.754615  156633 kapi.go:107] duration metric: took 7.099825ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "metrics-server-7fbb699795-s4bdh" [fb315cc6-a736-467a-8f3f-7e48a315f789] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003178799s
addons_test.go:402: (dbg) Run:  kubectl --context addons-411768 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-411768 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-411768 addons disable metrics-server --alsologtostderr -v=1: (1.23204245s)
--- PASS: TestAddons/parallel/MetricsServer (6.31s)

TestAddons/parallel/CSI (50.8s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0414 16:33:49.747529  156633 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 7.11087ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-411768 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-411768 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6e4cd99d-2109-4f72-9b69-c4e42a9d8cdb] Pending
helpers_test.go:344: "task-pv-pod" [6e4cd99d-2109-4f72-9b69-c4e42a9d8cdb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6e4cd99d-2109-4f72-9b69-c4e42a9d8cdb] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003210725s
addons_test.go:511: (dbg) Run:  kubectl --context addons-411768 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-411768 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-411768 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-411768 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-411768 delete pod task-pv-pod: (1.353714298s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-411768 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-411768 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-411768 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [779860dd-6f16-40ee-a078-aa1f4dd024cb] Pending
helpers_test.go:344: "task-pv-pod-restore" [779860dd-6f16-40ee-a078-aa1f4dd024cb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [779860dd-6f16-40ee-a078-aa1f4dd024cb] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003761648s
addons_test.go:553: (dbg) Run:  kubectl --context addons-411768 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-411768 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-411768 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-411768 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-411768 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-411768 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.754457401s)
--- PASS: TestAddons/parallel/CSI (50.80s)
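
The run of identical "get pvc hpvc" lines above is a plain polling loop: the helper re-reads .status.phase until the claim reports Bound or the 6m0s budget runs out. A simplified Go sketch of such a loop; the names come from this run, while the 2-second interval is an assumption, not the helper's actual value.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// Same jsonpath query as helpers_test.go:394.
		out, _ := exec.Command("kubectl", "--context", "addons-411768",
			"get", "pvc", "hpvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if strings.TrimSpace(string(out)) == "Bound" {
			fmt.Println("pvc bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc")
}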

TestAddons/parallel/Headlamp (21.15s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-411768 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-9ddhj" [6c8b8d9c-48db-4cd7-abe8-44b631e2a0a6] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-9ddhj" [6c8b8d9c-48db-4cd7-abe8-44b631e2a0a6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-9ddhj" [6c8b8d9c-48db-4cd7-abe8-44b631e2a0a6] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.003144586s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-411768 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-411768 addons disable headlamp --alsologtostderr -v=1: (6.226535666s)
--- PASS: TestAddons/parallel/Headlamp (21.15s)

TestAddons/parallel/CloudSpanner (5.56s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7dc7f9b5b8-vbn5c" [1585fae8-d827-4996-8f81-6d06a66b84ee] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003807761s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-411768 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.56s)

TestAddons/parallel/LocalPath (11.1s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-411768 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-411768 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-411768 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8720be54-6704-49b3-9bc4-ad3084bd43bb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8720be54-6704-49b3-9bc4-ad3084bd43bb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8720be54-6704-49b3-9bc4-ad3084bd43bb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.014959024s
addons_test.go:906: (dbg) Run:  kubectl --context addons-411768 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-411768 ssh "cat /opt/local-path-provisioner/pvc-0223bcea-7c20-4f57-890f-2ceeb26fd209_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-411768 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-411768 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-411768 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (11.10s)
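
The "ssh cat" step above is the interesting one: local-path-provisioner backs the PVC with an ordinary host directory, so once the writer pod completes, its file can be read straight off the node. A sketch of that verification; the pvc-... directory name is per-run and copied from the log line above.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The provisioner names the backing directory after the PV, so this
	// literal path is only valid for this particular run.
	path := "/opt/local-path-provisioner/pvc-0223bcea-7c20-4f57-890f-2ceeb26fd209_default_test-pvc/file1"
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "addons-411768", "ssh", "cat "+path).Output()
	if err != nil {
		fmt.Println("file not found on node:", err)
		return
	}
	fmt.Printf("node file contents: %s\n", out)
}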

TestAddons/parallel/NvidiaDevicePlugin (5.59s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fqwqf" [e5a2d34f-7429-47b0-9239-917c6907123c] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004456665s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-411768 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.59s)

TestAddons/parallel/Yakd (11.8s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-s9jgd" [2bb7b721-5064-4870-9dd7-7f22cbab9d28] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004156608s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-411768 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-411768 addons disable yakd --alsologtostderr -v=1: (5.797311725s)
--- PASS: TestAddons/parallel/Yakd (11.80s)

TestAddons/StoppedEnableDisable (91s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-411768
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-411768: (1m30.730923296s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-411768
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-411768
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-411768
--- PASS: TestAddons/StoppedEnableDisable (91.00s)

TestCertOptions (71.04s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-013440 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-013440 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (59.799938379s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-013440 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-013440 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-013440 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-013440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-013440
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-013440: (10.796669186s)
--- PASS: TestCertOptions (71.04s)
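
TestCertOptions passes because the extra --apiserver-ips and --apiserver-names values end up as SANs in apiserver.crt, which the openssl call above dumps as text. A Go sketch of checking for those SANs; profile and cert path are from this run, and the substring match is a simplification of the test's real parsing.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Dump the apiserver certificate from inside the node, as
	// cert_options_test.go:60 does.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-options-013440",
		"ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	for _, san := range []string{"192.168.15.15", "www.google.com"} {
		if strings.Contains(string(out), san) {
			fmt.Println("found SAN:", san)
		} else {
			fmt.Println("missing SAN:", san)
		}
	}
}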

TestCertExpiration (313.97s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-560919 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-560919 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m30.219976329s)
E0414 17:32:08.946029  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-560919 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-560919 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (42.781004268s)
helpers_test.go:175: Cleaning up "cert-expiration-560919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-560919
--- PASS: TestCertExpiration (313.97s)

TestForceSystemdFlag (106.17s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-038253 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-038253 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m45.173534468s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-038253 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-038253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-038253
--- PASS: TestForceSystemdFlag (106.17s)

TestForceSystemdEnv (45.26s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-935323 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-935323 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.263790895s)
helpers_test.go:175: Cleaning up "force-systemd-env-935323" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-935323
--- PASS: TestForceSystemdEnv (45.26s)

TestKVMDriverInstallOrUpdate (1.33s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0414 17:32:48.745558  156633 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 17:32:48.745726  156633 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0414 17:32:48.774173  156633 install.go:62] docker-machine-driver-kvm2: exit status 1
W0414 17:32:48.774369  156633 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0414 17:32:48.774446  156633 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate541032956/001/docker-machine-driver-kvm2
I0414 17:32:48.930971  156633 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate541032956/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0005c7008 gz:0xc0005c70b0 tar:0xc0005c7040 tar.bz2:0xc0005c7060 tar.gz:0xc0005c7070 tar.xz:0xc0005c7090 tar.zst:0xc0005c70a0 tbz2:0xc0005c7060 tgz:0xc0005c7070 txz:0xc0005c7090 tzst:0xc0005c70a0 xz:0xc0005c70b8 zip:0xc0005c70c0 zst:0xc0005c70d0] Getters:map[file:0xc001b296e0 http:0xc0008937c0 https:0xc0008938b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0414 17:32:48.931050  156633 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate541032956/001/docker-machine-driver-kvm2
I0414 17:32:49.553301  156633 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 17:32:49.553393  156633 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0414 17:32:49.581445  156633 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0414 17:32:49.581472  156633 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0414 17:32:49.581550  156633 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0414 17:32:49.581582  156633 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate541032956/002/docker-machine-driver-kvm2
I0414 17:32:49.604181  156633 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate541032956/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0005c7008 gz:0xc0005c70b0 tar:0xc0005c7040 tar.bz2:0xc0005c7060 tar.gz:0xc0005c7070 tar.xz:0xc0005c7090 tar.zst:0xc0005c70a0 tbz2:0xc0005c7060 tgz:0xc0005c7070 txz:0xc0005c7090 tzst:0xc0005c70a0 xz:0xc0005c70b8 zip:0xc0005c70c0 zst:0xc0005c70d0] Getters:map[file:0xc00097be50 http:0xc00172d310 https:0xc00172d360] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0414 17:32:49.604229  156633 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate541032956/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.33s)
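
The two warnings above trace a deliberate fallback in driver.go: the arch-suffixed release asset (docker-machine-driver-kvm2-amd64) is tried first, and when its checksum file answers 404 the un-suffixed "common" name is fetched instead. A stripped-down Go sketch of that logic, with a plain http.Get standing in for the go-getter download minikube actually uses:

package main

import (
	"fmt"
	"net/http"
)

// fetch reports an error for any non-200 response, which is what
// triggers the fallback seen in the log above.
func fetch(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
	if err := fetch(base + "-amd64.sha256"); err != nil {
		fmt.Println("arch specific driver failed:", err, "- trying the common version")
		if err := fetch(base + ".sha256"); err != nil {
			fmt.Println("common version failed too:", err)
			return
		}
	}
	fmt.Println("checksum file fetched")
}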

TestErrorSpam/setup (42.22s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-907171 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-907171 --driver=kvm2  --container-runtime=crio
E0414 16:38:31.089901  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:38:31.100269  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:38:31.112141  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:38:31.133392  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:38:31.174721  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:38:31.256121  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:38:31.417629  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:38:31.739347  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:38:32.381365  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:38:33.662959  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:38:36.225001  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:38:41.346403  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:38:51.588172  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-907171 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-907171 --driver=kvm2  --container-runtime=crio: (42.21772192s)
--- PASS: TestErrorSpam/setup (42.22s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907171 --log_dir /tmp/nospam-907171 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907171 --log_dir /tmp/nospam-907171 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907171 --log_dir /tmp/nospam-907171 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.71s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907171 --log_dir /tmp/nospam-907171 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907171 --log_dir /tmp/nospam-907171 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907171 --log_dir /tmp/nospam-907171 status
--- PASS: TestErrorSpam/status (0.71s)

TestErrorSpam/pause (1.53s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907171 --log_dir /tmp/nospam-907171 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907171 --log_dir /tmp/nospam-907171 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907171 --log_dir /tmp/nospam-907171 pause
--- PASS: TestErrorSpam/pause (1.53s)

TestErrorSpam/unpause (1.69s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907171 --log_dir /tmp/nospam-907171 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907171 --log_dir /tmp/nospam-907171 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907171 --log_dir /tmp/nospam-907171 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

TestErrorSpam/stop (5.42s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907171 --log_dir /tmp/nospam-907171 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-907171 --log_dir /tmp/nospam-907171 stop: (2.314221199s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907171 --log_dir /tmp/nospam-907171 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-907171 --log_dir /tmp/nospam-907171 stop: (1.727188169s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-907171 --log_dir /tmp/nospam-907171 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-907171 --log_dir /tmp/nospam-907171 stop: (1.376862466s)
--- PASS: TestErrorSpam/stop (5.42s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20349-149500/.minikube/files/etc/test/nested/copy/156633/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (91.86s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-207815 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0414 16:39:12.070049  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:39:53.033039  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-207815 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m31.857617183s)
--- PASS: TestFunctional/serial/StartWithProxy (91.86s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.73s)

=== RUN   TestFunctional/serial/SoftStart
I0414 16:40:34.275745  156633 config.go:182] Loaded profile config "functional-207815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-207815 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-207815 --alsologtostderr -v=8: (29.729058829s)
functional_test.go:680: soft start took 29.729680591s for "functional-207815" cluster.
I0414 16:41:04.005130  156633 config.go:182] Loaded profile config "functional-207815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (29.73s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-207815 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-207815 cache add registry.k8s.io/pause:3.3: (1.162013373s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-207815 cache add registry.k8s.io/pause:latest: (1.041870716s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.20s)

TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-207815 /tmp/TestFunctionalserialCacheCmdcacheadd_local637544494/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 cache add minikube-local-cache-test:functional-207815
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 cache delete minikube-local-cache-test:functional-207815
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-207815
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-207815 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (206.71743ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
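
The reload round-trip above is: remove the image inside the node, watch crictl inspecti fail with exit status 1, run "minikube cache reload", then watch the same inspecti succeed. A hedged Go sketch of driving that sequence; the profile name is from this run, and run() is a hypothetical helper, not the suite's code.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and returns its error.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	mk := "out/minikube-linux-amd64"
	run(mk, "-p", "functional-207815", "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	// Expected to fail while the image is absent from the node.
	run(mk, "-p", "functional-207815", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest")
	run(mk, "-p", "functional-207815", "cache", "reload")
	// Should succeed now: reload pushed the cached image back in.
	run(mk, "-p", "functional-207815", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest")
}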

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 kubectl -- --context functional-207815 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-207815 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (50.64s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-207815 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0414 16:41:14.958291  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-207815 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (50.634821202s)
functional_test.go:778: restart took 50.634919063s for "functional-207815" cluster.
I0414 16:42:01.282528  156633 config.go:182] Loaded profile config "functional-207815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (50.64s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-207815 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
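
The phase/status lines above come from walking kubectl's JSON for the control-plane pods. A minimal Go sketch that derives the same output; the context name is from this run, and the struct covers only the fields actually used.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList models just the slice of kubectl's "get po -o json" output
// that the health check needs.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-207815",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		name := p.Metadata.Labels["component"]
		fmt.Printf("%s phase: %s\n", name, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				fmt.Printf("%s status: Ready\n", name)
			}
		}
	}
}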

TestFunctional/serial/LogsCmd (1.38s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-207815 logs: (1.382233538s)
--- PASS: TestFunctional/serial/LogsCmd (1.38s)

TestFunctional/serial/LogsFileCmd (1.39s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 logs --file /tmp/TestFunctionalserialLogsFileCmd632314349/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-207815 logs --file /tmp/TestFunctionalserialLogsFileCmd632314349/001/logs.txt: (1.386643286s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.39s)

TestFunctional/serial/InvalidService (4.37s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-207815 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-207815
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-207815: exit status 115 (259.026729ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.239:30964 |
	|-----------|-------------|-------------|-----------------------------|
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-207815 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.37s)

TestFunctional/parallel/ConfigCmd (0.33s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-207815 config get cpus: exit status 14 (50.067264ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-207815 config get cpus: exit status 14 (52.305423ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)
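
The exit status 14 results above are expected: after config unset cpus, config get cpus reports the key as missing and exits non-zero, and the test asserts exactly that round-trip. A hedged sketch of the same unset/set cycle against the minikube binary used in this report:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used in this report and returns the
// combined output plus the process exit code.
func run(args ...string) (string, int) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode()
	}
	return string(out), code
}

func main() {
	// After an unset, "config get cpus" reports the key as missing
	// (exit status 14 in this log) rather than succeeding with empty output.
	run("-p", "functional-207815", "config", "unset", "cpus")
	out, code := run("-p", "functional-207815", "config", "get", "cpus")
	fmt.Printf("after unset: code=%d output=%q\n", code, out)

	// After a set, the same get succeeds and echoes the value back.
	run("-p", "functional-207815", "config", "set", "cpus", "2")
	out, code = run("-p", "functional-207815", "config", "get", "cpus")
	fmt.Printf("after set: code=%d output=%q\n", code, out)
}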

TestFunctional/parallel/DashboardCmd (28.14s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-207815 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-207815 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 164537: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (28.14s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-207815 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-207815 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (133.066455ms)

-- stdout --
	* [functional-207815] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20349
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
-- /stdout --
** stderr ** 
	I0414 16:42:20.733567  164200 out.go:345] Setting OutFile to fd 1 ...
	I0414 16:42:20.733652  164200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 16:42:20.733659  164200 out.go:358] Setting ErrFile to fd 2...
	I0414 16:42:20.733663  164200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 16:42:20.733850  164200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	I0414 16:42:20.734354  164200 out.go:352] Setting JSON to false
	I0414 16:42:20.735293  164200 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5039,"bootTime":1744643902,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 16:42:20.735384  164200 start.go:139] virtualization: kvm guest
	I0414 16:42:20.737058  164200 out.go:177] * [functional-207815] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 16:42:20.738701  164200 notify.go:220] Checking for updates...
	I0414 16:42:20.738764  164200 out.go:177]   - MINIKUBE_LOCATION=20349
	I0414 16:42:20.740100  164200 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 16:42:20.741279  164200 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 16:42:20.742534  164200 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 16:42:20.743846  164200 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 16:42:20.745031  164200 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 16:42:20.746568  164200 config.go:182] Loaded profile config "functional-207815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 16:42:20.747002  164200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:42:20.747062  164200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:42:20.762095  164200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33919
	I0414 16:42:20.762555  164200 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:42:20.763033  164200 main.go:141] libmachine: Using API Version  1
	I0414 16:42:20.763056  164200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:42:20.763424  164200 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:42:20.763613  164200 main.go:141] libmachine: (functional-207815) Calling .DriverName
	I0414 16:42:20.763844  164200 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 16:42:20.764114  164200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:42:20.764149  164200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:42:20.779225  164200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45727
	I0414 16:42:20.779617  164200 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:42:20.780048  164200 main.go:141] libmachine: Using API Version  1
	I0414 16:42:20.780074  164200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:42:20.780422  164200 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:42:20.780639  164200 main.go:141] libmachine: (functional-207815) Calling .DriverName
	I0414 16:42:20.814234  164200 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 16:42:20.815490  164200 start.go:297] selected driver: kvm2
	I0414 16:42:20.815505  164200 start.go:901] validating driver "kvm2" against &{Name:functional-207815 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-207815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 16:42:20.815593  164200 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 16:42:20.817823  164200 out.go:201] 
	W0414 16:42:20.819000  164200 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0414 16:42:20.820181  164200 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-207815 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
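
--dry-run still exercises the full validation path, which is why requesting 250MB fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23, below the 1800MB floor stated in the error) while the second invocation without --memory succeeds. A small sketch asserting that behaviour, reusing the binary and profile names from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// The same dry-run the test issues; 250MB is below the 1800MB floor,
	// so validation should fail before any VM work happens.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-207815",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 {
		fmt.Println("got expected RSRC_INSUFFICIENT_REQ_MEMORY exit code 23")
		return
	}
	fmt.Printf("unexpected result: %v\n", err)
}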

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-207815 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-207815 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (147.699075ms)

-- stdout --
	* [functional-207815] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20349
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
-- /stdout --
** stderr ** 
	I0414 16:42:17.263277  163745 out.go:345] Setting OutFile to fd 1 ...
	I0414 16:42:17.263396  163745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 16:42:17.263402  163745 out.go:358] Setting ErrFile to fd 2...
	I0414 16:42:17.263409  163745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 16:42:17.263788  163745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	I0414 16:42:17.264498  163745 out.go:352] Setting JSON to false
	I0414 16:42:17.265694  163745 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5035,"bootTime":1744643902,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 16:42:17.265821  163745 start.go:139] virtualization: kvm guest
	I0414 16:42:17.268113  163745 out.go:177] * [functional-207815] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0414 16:42:17.269612  163745 notify.go:220] Checking for updates...
	I0414 16:42:17.269652  163745 out.go:177]   - MINIKUBE_LOCATION=20349
	I0414 16:42:17.270996  163745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 16:42:17.272273  163745 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 16:42:17.273681  163745 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 16:42:17.274922  163745 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 16:42:17.276182  163745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 16:42:17.277678  163745 config.go:182] Loaded profile config "functional-207815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 16:42:17.278119  163745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:42:17.278179  163745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:42:17.294693  163745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44587
	I0414 16:42:17.295207  163745 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:42:17.295723  163745 main.go:141] libmachine: Using API Version  1
	I0414 16:42:17.295751  163745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:42:17.296136  163745 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:42:17.296309  163745 main.go:141] libmachine: (functional-207815) Calling .DriverName
	I0414 16:42:17.296522  163745 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 16:42:17.296785  163745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:42:17.296850  163745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:42:17.311532  163745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42425
	I0414 16:42:17.312043  163745 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:42:17.312524  163745 main.go:141] libmachine: Using API Version  1
	I0414 16:42:17.312553  163745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:42:17.313111  163745 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:42:17.313316  163745 main.go:141] libmachine: (functional-207815) Calling .DriverName
	I0414 16:42:17.346327  163745 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0414 16:42:17.347595  163745 start.go:297] selected driver: kvm2
	I0414 16:42:17.347608  163745 start.go:901] validating driver "kvm2" against &{Name:functional-207815 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-207815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 16:42:17.347722  163745 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 16:42:17.349811  163745 out.go:201] 
	W0414 16:42:17.351135  163745 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0414 16:42:17.352320  163745 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
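
The French output is the same memory-validation failure as in DryRun, translated. The harness appears to select the locale through the child process environment; a sketch of forcing that by hand, assuming minikube honours LC_ALL (an assumption, not confirmed by this log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-207815",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
	// Inherit the environment but ask for French messages.
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, _ := cmd.CombinedOutput() // a non-zero exit is expected here
	fmt.Print(string(out))         // should contain "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY"
}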

TestFunctional/parallel/StatusCmd (1.05s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)

TestFunctional/parallel/ServiceCmdConnect (10.53s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-207815 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-207815 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-j8xct" [0d88f102-5be5-487f-ae91-dfb41bfaf5a9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-j8xct" [0d88f102-5be5-487f-ae91-dfb41bfaf5a9] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003303304s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.239:32705
functional_test.go:1692: http://192.168.39.239:32705: success! body:

Hostname: hello-node-connect-58f9cf68d8-j8xct

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.239:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.239:32705
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.53s)
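
Once "service hello-node-connect --url" prints the NodePort endpoint, verifying it is a plain HTTP GET that looks for the pod hostname in the echoserver body shown above. A minimal client doing the same, with the URL from this run hard-coded for illustration:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Endpoint printed by "minikube service hello-node-connect --url" in this run.
	resp, err := http.Get("http://192.168.39.239:32705")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	if strings.Contains(string(body), "Hostname: hello-node-connect") {
		fmt.Println("echoserver reachable via NodePort")
	}
}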

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (41.76s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f10508dc-193d-4573-babd-cfef6f00a8ba] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.006263825s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-207815 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-207815 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-207815 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-207815 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [69518427-838d-452c-ae18-5349c74d7ff2] Pending
helpers_test.go:344: "sp-pod" [69518427-838d-452c-ae18-5349c74d7ff2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [69518427-838d-452c-ae18-5349c74d7ff2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.00451906s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-207815 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-207815 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-207815 delete -f testdata/storage-provisioner/pod.yaml: (1.837709582s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-207815 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [14273713-b4af-45ea-8b93-c6124358cccd] Pending
helpers_test.go:344: "sp-pod" [14273713-b4af-45ea-8b93-c6124358cccd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [14273713-b4af-45ea-8b93-c6124358cccd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.003544079s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-207815 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.76s)
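
The interesting part of this test is the persistence round-trip: write a file into the mounted volume, delete the pod, recreate it from the same manifest, and confirm the file survived, which shows the claim outlives the pod. A condensed sketch of those steps (the manifest paths are the test's own; the real test also waits for sp-pod to be Running again between apply and exec):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// kubectl runs a kubectl command against this run's context and fails hard on error.
func kubectl(args ...string) string {
	args = append([]string{"--context", "functional-207815"}, args...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")       // write into the volume
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml") // kill the pod
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")  // recreate it
	// (wait for sp-pod to be Running again before the final check)
	fmt.Println(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")) // "foo" should survive
}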

TestFunctional/parallel/SSHCmd (0.44s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

TestFunctional/parallel/CpCmd (1.46s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh -n functional-207815 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 cp functional-207815:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3380879543/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh -n functional-207815 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh -n functional-207815 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.46s)
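
Each copy above is verified by reading the file back over "ssh -n". A condensed sketch of the host-to-VM and VM-to-host round-trip; the /tmp/cp-back.txt destination is illustrative, not the test's temp path:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// mk runs the minikube binary from this report against the test profile.
func mk(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-207815"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	mk("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt") // host -> VM
	fmt.Print(mk("ssh", "-n", "functional-207815", "sudo cat /home/docker/cp-test.txt"))
	mk("cp", "functional-207815:/home/docker/cp-test.txt", "/tmp/cp-back.txt") // VM -> host
}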

TestFunctional/parallel/MySQL (22.69s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-207815 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-5mq2b" [9e152f27-86e2-4a48-ad8b-7e7b04085ddd] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-5mq2b" [9e152f27-86e2-4a48-ad8b-7e7b04085ddd] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.003590569s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-207815 exec mysql-58ccfd96bb-5mq2b -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-207815 exec mysql-58ccfd96bb-5mq2b -- mysql -ppassword -e "show databases;": exit status 1 (151.107368ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0414 16:42:42.691017  156633 retry.go:31] will retry after 697.102402ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-207815 exec mysql-58ccfd96bb-5mq2b -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-207815 exec mysql-58ccfd96bb-5mq2b -- mysql -ppassword -e "show databases;": exit status 1 (177.708214ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0414 16:42:43.566389  156633 retry.go:31] will retry after 1.28177141s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-207815 exec mysql-58ccfd96bb-5mq2b -- mysql -ppassword -e "show databases;"
2025/04/14 16:42:48 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (22.69s)
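
The ERROR 2002 failures above just mean mysqld inside the pod had not finished initialising even though the container was already Running, so the harness retries with a growing delay (697ms, then 1.28s) until "show databases;" succeeds. A retry sketch in the same spirit:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	query := []string{"--context", "functional-207815", "exec", "mysql-58ccfd96bb-5mq2b",
		"--", "mysql", "-ppassword", "-e", "show databases;"}
	backoff := 700 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", query...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		// ERROR 2002: the server socket isn't up yet; wait and try again.
		log.Printf("attempt %d failed (%v), retrying in %s", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	log.Fatal("mysql never became reachable")
}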

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/156633/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "sudo cat /etc/test/nested/copy/156633/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.31s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/156633.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "sudo cat /etc/ssl/certs/156633.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/156633.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "sudo cat /usr/share/ca-certificates/156633.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/1566332.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "sudo cat /etc/ssl/certs/1566332.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/1566332.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "sudo cat /usr/share/ca-certificates/1566332.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.31s)
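
The paths checked above come in pairs: a PEM named after the test PID (156633.pem) and a hash-named file (51391683.0, the OpenSSL-style name a trust store uses for lookups). A sketch comparing the two over minikube ssh, assuming the test intends them to be byte-identical copies of the same certificate:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

// sshCat reads a file inside the minikube VM, the same way the test does.
func sshCat(path string) []byte {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-207815",
		"ssh", "sudo cat "+path).Output()
	if err != nil {
		log.Fatalf("cat %s: %v", path, err)
	}
	return out
}

func main() {
	pem := sshCat("/etc/ssl/certs/156633.pem")
	hashed := sshCat("/etc/ssl/certs/51391683.0") // OpenSSL subject-hash style name
	if bytes.Equal(pem, hashed) {
		fmt.Println("cert and its hash-named alias match")
	}
}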

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-207815 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-207815 ssh "sudo systemctl is-active docker": exit status 1 (239.653331ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-207815 ssh "sudo systemctl is-active containerd": exit status 1 (222.935971ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
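
The non-zero exits are the point here: "systemctl is-active" prints "inactive" and returns status 3 for a stopped unit, minikube ssh relays that as its own exit status 1 with the "Process exited with status 3" stderr line, and the test accepts that as proof that docker and containerd are disabled on a crio cluster. A sketch that surfaces the same states:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-207815",
			"ssh", "sudo systemctl is-active "+unit).Output()
		// A stopped unit prints "inactive" and exits 3 remotely; minikube ssh
		// relays that as a non-zero exit of its own, so err is expected here.
		fmt.Printf("%s: %s (err: %v)\n", unit, strings.TrimSpace(string(out)), err)
	}
}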

TestFunctional/parallel/License (0.18s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.83s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.83s)

TestFunctional/parallel/ImageCommands/ImageListShort (1.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 image ls --format short --alsologtostderr
functional_test.go:278: (dbg) Done: out/minikube-linux-amd64 -p functional-207815 image ls --format short --alsologtostderr: (1.911477362s)
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-207815 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-207815
localhost/kicbase/echo-server:functional-207815
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20241212-9f82dd49
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-207815 image ls --format short --alsologtostderr:
I0414 16:42:33.352027  165246 out.go:345] Setting OutFile to fd 1 ...
I0414 16:42:33.352164  165246 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 16:42:33.352176  165246 out.go:358] Setting ErrFile to fd 2...
I0414 16:42:33.352183  165246 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 16:42:33.352411  165246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
I0414 16:42:33.352968  165246 config.go:182] Loaded profile config "functional-207815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 16:42:33.353104  165246 config.go:182] Loaded profile config "functional-207815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 16:42:33.353540  165246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:42:33.353621  165246 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:42:33.369355  165246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45665
I0414 16:42:33.369922  165246 main.go:141] libmachine: () Calling .GetVersion
I0414 16:42:33.370483  165246 main.go:141] libmachine: Using API Version  1
I0414 16:42:33.370514  165246 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:42:33.370853  165246 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:42:33.371091  165246 main.go:141] libmachine: (functional-207815) Calling .GetState
I0414 16:42:33.373335  165246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:42:33.373385  165246 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:42:33.388698  165246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44047
I0414 16:42:33.389157  165246 main.go:141] libmachine: () Calling .GetVersion
I0414 16:42:33.389638  165246 main.go:141] libmachine: Using API Version  1
I0414 16:42:33.389665  165246 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:42:33.390011  165246 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:42:33.390177  165246 main.go:141] libmachine: (functional-207815) Calling .DriverName
I0414 16:42:33.390398  165246 ssh_runner.go:195] Run: systemctl --version
I0414 16:42:33.390420  165246 main.go:141] libmachine: (functional-207815) Calling .GetSSHHostname
I0414 16:42:33.393282  165246 main.go:141] libmachine: (functional-207815) DBG | domain functional-207815 has defined MAC address 52:54:00:3b:e8:6c in network mk-functional-207815
I0414 16:42:33.393703  165246 main.go:141] libmachine: (functional-207815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e8:6c", ip: ""} in network mk-functional-207815: {Iface:virbr1 ExpiryTime:2025-04-14 17:39:17 +0000 UTC Type:0 Mac:52:54:00:3b:e8:6c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:functional-207815 Clientid:01:52:54:00:3b:e8:6c}
I0414 16:42:33.393735  165246 main.go:141] libmachine: (functional-207815) DBG | domain functional-207815 has defined IP address 192.168.39.239 and MAC address 52:54:00:3b:e8:6c in network mk-functional-207815
I0414 16:42:33.393924  165246 main.go:141] libmachine: (functional-207815) Calling .GetSSHPort
I0414 16:42:33.394065  165246 main.go:141] libmachine: (functional-207815) Calling .GetSSHKeyPath
I0414 16:42:33.394256  165246 main.go:141] libmachine: (functional-207815) Calling .GetSSHUsername
I0414 16:42:33.394415  165246 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/functional-207815/id_rsa Username:docker}
I0414 16:42:33.502556  165246 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 16:42:35.214906  165246 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.712313837s)
I0414 16:42:35.215218  165246 main.go:141] libmachine: Making call to close driver server
I0414 16:42:35.215237  165246 main.go:141] libmachine: (functional-207815) Calling .Close
I0414 16:42:35.215518  165246 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:42:35.215568  165246 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:42:35.215577  165246 main.go:141] libmachine: Making call to close driver server
I0414 16:42:35.215584  165246 main.go:141] libmachine: (functional-207815) Calling .Close
I0414 16:42:35.215647  165246 main.go:141] libmachine: (functional-207815) DBG | Closing plugin on server side
I0414 16:42:35.215898  165246 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:42:35.215909  165246 main.go:141] libmachine: (functional-207815) DBG | Closing plugin on server side
I0414 16:42:35.215929  165246 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.91s)
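
The stderr trace shows what "image ls" does on a crio cluster: ssh into the VM and run "sudo crictl images --output json" (the 1.71s crictl call accounts for most of this test's 1.91s), then format the result. A sketch issuing the same query directly and decoding only the repoTags field:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// imageList mirrors the fragment of crictl's JSON output we care about.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-207815",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var imgs imageList
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatal(err)
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}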

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-207815 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | d300845f67aeb | 95.7MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/kicbase/echo-server           | functional-207815  | 9056ab77afb8e | 4.94MB |
| localhost/my-image                      | functional-207815  | 181d44c07fcc1 | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver          | v1.32.2            | 85b7a174738ba | 98.1MB |
| registry.k8s.io/kube-controller-manager | v1.32.2            | b6a454c5a800d | 90.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| localhost/minikube-local-cache-test     | functional-207815  | 8f6681b7ef508 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.32.2            | d8e673e7c9983 | 70.7MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | 4cad75abc83d5 | 196MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/kube-proxy              | v1.32.2            | f1332858868e1 | 95.3MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-207815 image ls --format table --alsologtostderr:
I0414 16:42:39.700637  165411 out.go:345] Setting OutFile to fd 1 ...
I0414 16:42:39.700748  165411 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 16:42:39.700758  165411 out.go:358] Setting ErrFile to fd 2...
I0414 16:42:39.700764  165411 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 16:42:39.701060  165411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
I0414 16:42:39.701923  165411 config.go:182] Loaded profile config "functional-207815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 16:42:39.702077  165411 config.go:182] Loaded profile config "functional-207815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 16:42:39.702640  165411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:42:39.702700  165411 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:42:39.717857  165411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34441
I0414 16:42:39.718403  165411 main.go:141] libmachine: () Calling .GetVersion
I0414 16:42:39.718995  165411 main.go:141] libmachine: Using API Version  1
I0414 16:42:39.719019  165411 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:42:39.719360  165411 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:42:39.719534  165411 main.go:141] libmachine: (functional-207815) Calling .GetState
I0414 16:42:39.721487  165411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:42:39.721536  165411 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:42:39.736035  165411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34113
I0414 16:42:39.736504  165411 main.go:141] libmachine: () Calling .GetVersion
I0414 16:42:39.736907  165411 main.go:141] libmachine: Using API Version  1
I0414 16:42:39.736950  165411 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:42:39.737270  165411 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:42:39.737464  165411 main.go:141] libmachine: (functional-207815) Calling .DriverName
I0414 16:42:39.737683  165411 ssh_runner.go:195] Run: systemctl --version
I0414 16:42:39.737715  165411 main.go:141] libmachine: (functional-207815) Calling .GetSSHHostname
I0414 16:42:39.740580  165411 main.go:141] libmachine: (functional-207815) DBG | domain functional-207815 has defined MAC address 52:54:00:3b:e8:6c in network mk-functional-207815
I0414 16:42:39.741027  165411 main.go:141] libmachine: (functional-207815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e8:6c", ip: ""} in network mk-functional-207815: {Iface:virbr1 ExpiryTime:2025-04-14 17:39:17 +0000 UTC Type:0 Mac:52:54:00:3b:e8:6c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:functional-207815 Clientid:01:52:54:00:3b:e8:6c}
I0414 16:42:39.741071  165411 main.go:141] libmachine: (functional-207815) DBG | domain functional-207815 has defined IP address 192.168.39.239 and MAC address 52:54:00:3b:e8:6c in network mk-functional-207815
I0414 16:42:39.741144  165411 main.go:141] libmachine: (functional-207815) Calling .GetSSHPort
I0414 16:42:39.741298  165411 main.go:141] libmachine: (functional-207815) Calling .GetSSHKeyPath
I0414 16:42:39.741421  165411 main.go:141] libmachine: (functional-207815) Calling .GetSSHUsername
I0414 16:42:39.741545  165411 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/functional-207815/id_rsa Username:docker}
I0414 16:42:39.853658  165411 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 16:42:39.925849  165411 main.go:141] libmachine: Making call to close driver server
I0414 16:42:39.925869  165411 main.go:141] libmachine: (functional-207815) Calling .Close
I0414 16:42:39.926169  165411 main.go:141] libmachine: (functional-207815) DBG | Closing plugin on server side
I0414 16:42:39.926229  165411 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:42:39.926246  165411 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:42:39.926256  165411 main.go:141] libmachine: Making call to close driver server
I0414 16:42:39.926266  165411 main.go:141] libmachine: (functional-207815) Calling .Close
I0414 16:42:39.926533  165411 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:42:39.926565  165411 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:42:39.926534  165411 main.go:141] libmachine: (functional-207815) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-207815 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-207815"],"size":"4943877"},{"id":"181d44c07fcc10471739e878c91685673e743ce845043667d22f85808eaf1896","repoDigests":["localhost/my-image@sha256:519b13bd54be941c3dfe14350c6774f939640a05b1da0ca4501406c855f6e6ac"],"repoTags":["localhost/my-image:functional-207815"],"size":"1468599"},{"id":"f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":["registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d","registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"95271321"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43d
a4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9a7c0474d6c9f7931fcfd9af8fae38a5ed909cc2e79f54f577796be75acdadb2","repoDigests":["docker.io/library/18c63e5e53674335367eebc915062c2666af65fb1b77d2459c132b9b45e63f3c-tmp@sha256:5c4bb9c969e4926133e51fefb0186a1f88aeff853b7b00ff12dd961d7fbebc5e"],"repoTags":[],"size":"1466018"},{"id":"4cad75abc83d5ca6ee22053d85850676eaef657ee9d723d7bef61179e1e1e485","repoDigests":["docker.io/library/nginx@sha256:09369da6b10306312cd908661320086bf87fbae1b6b0c49a1f50ba531fef2eab","docker.io/library/nginx@sha256:b6653fca400812e81569f9be762ae315db685bc30b12ddcdc8616c63a227d3ca"],"repoTags":["docker.io/library/nginx:latest"],"size":"196210580"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"b6a454c5a800
d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5","registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"90793286"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8
872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"8f6681b7ef5083565dbba3660e1573f66e5a08e42d15c9c44b3c1c8b4ac6bbad","repoDigests":["localhost/minikube-local-cache-test@sha256:0915250d153e6cb2df7fd46a3ede751bf9bb098c4050a7e38da4e020cc1c3210"],"repoTags":["localhost/minikube-local-cache-test:functional-207815"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":
"d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76","registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"70653254"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d
0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":["registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d","registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"98055648"},{"id":"873ed75
102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26","docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"95714353"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[
],"size":"43824855"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-207815 image ls --format json --alsologtostderr:
I0414 16:42:39.273430  165387 out.go:345] Setting OutFile to fd 1 ...
I0414 16:42:39.273552  165387 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 16:42:39.273562  165387 out.go:358] Setting ErrFile to fd 2...
I0414 16:42:39.273566  165387 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 16:42:39.273752  165387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
I0414 16:42:39.274400  165387 config.go:182] Loaded profile config "functional-207815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 16:42:39.274511  165387 config.go:182] Loaded profile config "functional-207815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 16:42:39.274934  165387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:42:39.274994  165387 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:42:39.290069  165387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34179
I0414 16:42:39.290567  165387 main.go:141] libmachine: () Calling .GetVersion
I0414 16:42:39.291096  165387 main.go:141] libmachine: Using API Version  1
I0414 16:42:39.291116  165387 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:42:39.291520  165387 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:42:39.291730  165387 main.go:141] libmachine: (functional-207815) Calling .GetState
I0414 16:42:39.293393  165387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:42:39.293438  165387 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:42:39.308173  165387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38979
I0414 16:42:39.308602  165387 main.go:141] libmachine: () Calling .GetVersion
I0414 16:42:39.309181  165387 main.go:141] libmachine: Using API Version  1
I0414 16:42:39.309213  165387 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:42:39.309546  165387 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:42:39.309733  165387 main.go:141] libmachine: (functional-207815) Calling .DriverName
I0414 16:42:39.309958  165387 ssh_runner.go:195] Run: systemctl --version
I0414 16:42:39.309993  165387 main.go:141] libmachine: (functional-207815) Calling .GetSSHHostname
I0414 16:42:39.312689  165387 main.go:141] libmachine: (functional-207815) DBG | domain functional-207815 has defined MAC address 52:54:00:3b:e8:6c in network mk-functional-207815
I0414 16:42:39.313093  165387 main.go:141] libmachine: (functional-207815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e8:6c", ip: ""} in network mk-functional-207815: {Iface:virbr1 ExpiryTime:2025-04-14 17:39:17 +0000 UTC Type:0 Mac:52:54:00:3b:e8:6c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:functional-207815 Clientid:01:52:54:00:3b:e8:6c}
I0414 16:42:39.313127  165387 main.go:141] libmachine: (functional-207815) DBG | domain functional-207815 has defined IP address 192.168.39.239 and MAC address 52:54:00:3b:e8:6c in network mk-functional-207815
I0414 16:42:39.313207  165387 main.go:141] libmachine: (functional-207815) Calling .GetSSHPort
I0414 16:42:39.313366  165387 main.go:141] libmachine: (functional-207815) Calling .GetSSHKeyPath
I0414 16:42:39.313496  165387 main.go:141] libmachine: (functional-207815) Calling .GetSSHUsername
I0414 16:42:39.313637  165387 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/functional-207815/id_rsa Username:docker}
I0414 16:42:39.451179  165387 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 16:42:39.525497  165387 main.go:141] libmachine: Making call to close driver server
I0414 16:42:39.525521  165387 main.go:141] libmachine: (functional-207815) Calling .Close
I0414 16:42:39.525808  165387 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:42:39.525861  165387 main.go:141] libmachine: (functional-207815) DBG | Closing plugin on server side
I0414 16:42:39.525888  165387 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:42:39.525901  165387 main.go:141] libmachine: Making call to close driver server
I0414 16:42:39.525908  165387 main.go:141] libmachine: (functional-207815) Calling .Close
I0414 16:42:39.526195  165387 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:42:39.526231  165387 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:42:39.526286  165387 main.go:141] libmachine: (functional-207815) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.42s)
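The JSON listing above is a flat array of image records keyed by id, repoDigests, repoTags, and size (bytes, serialized as a string). A minimal Go sketch for consuming that output, assuming only the field names visible in this log (the listImage struct name is illustrative, not minikube's internal type):

```go
// Sketch: decode the `minikube image ls --format json` output shown above.
// Field names come from the stdout in this log; the struct is ours.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type listImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a string
}

func main() {
	var images []listImage
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%s\t%s\n", img.RepoTags[0], img.Size)
		}
	}
}
```

Piping `out/minikube-linux-amd64 -p functional-207815 image ls --format json` into this would print one tag/size line per tagged image; untagged intermediates (empty repoTags, like the `-tmp` layer above) are skipped.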

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-207815 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "90793286"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
- docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "95714353"
- id: 4cad75abc83d5ca6ee22053d85850676eaef657ee9d723d7bef61179e1e1e485
repoDigests:
- docker.io/library/nginx@sha256:09369da6b10306312cd908661320086bf87fbae1b6b0c49a1f50ba531fef2eab
- docker.io/library/nginx@sha256:b6653fca400812e81569f9be762ae315db685bc30b12ddcdc8616c63a227d3ca
repoTags:
- docker.io/library/nginx:latest
size: "196210580"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-207815
size: "4943877"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "98055648"
- id: f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
- registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "95271321"
- id: 8f6681b7ef5083565dbba3660e1573f66e5a08e42d15c9c44b3c1c8b4ac6bbad
repoDigests:
- localhost/minikube-local-cache-test@sha256:0915250d153e6cb2df7fd46a3ede751bf9bb098c4050a7e38da4e020cc1c3210
repoTags:
- localhost/minikube-local-cache-test:functional-207815
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
- registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "70653254"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-207815 image ls --format yaml --alsologtostderr:
I0414 16:42:35.265358  165269 out.go:345] Setting OutFile to fd 1 ...
I0414 16:42:35.265479  165269 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 16:42:35.265490  165269 out.go:358] Setting ErrFile to fd 2...
I0414 16:42:35.265495  165269 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 16:42:35.265762  165269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
I0414 16:42:35.266611  165269 config.go:182] Loaded profile config "functional-207815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 16:42:35.266768  165269 config.go:182] Loaded profile config "functional-207815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 16:42:35.267324  165269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:42:35.267393  165269 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:42:35.283711  165269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42279
I0414 16:42:35.284217  165269 main.go:141] libmachine: () Calling .GetVersion
I0414 16:42:35.284816  165269 main.go:141] libmachine: Using API Version  1
I0414 16:42:35.284851  165269 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:42:35.285347  165269 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:42:35.285608  165269 main.go:141] libmachine: (functional-207815) Calling .GetState
I0414 16:42:35.287801  165269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:42:35.287858  165269 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:42:35.304688  165269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45849
I0414 16:42:35.305150  165269 main.go:141] libmachine: () Calling .GetVersion
I0414 16:42:35.305699  165269 main.go:141] libmachine: Using API Version  1
I0414 16:42:35.305732  165269 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:42:35.306114  165269 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:42:35.306334  165269 main.go:141] libmachine: (functional-207815) Calling .DriverName
I0414 16:42:35.306539  165269 ssh_runner.go:195] Run: systemctl --version
I0414 16:42:35.306569  165269 main.go:141] libmachine: (functional-207815) Calling .GetSSHHostname
I0414 16:42:35.309426  165269 main.go:141] libmachine: (functional-207815) DBG | domain functional-207815 has defined MAC address 52:54:00:3b:e8:6c in network mk-functional-207815
I0414 16:42:35.309888  165269 main.go:141] libmachine: (functional-207815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e8:6c", ip: ""} in network mk-functional-207815: {Iface:virbr1 ExpiryTime:2025-04-14 17:39:17 +0000 UTC Type:0 Mac:52:54:00:3b:e8:6c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:functional-207815 Clientid:01:52:54:00:3b:e8:6c}
I0414 16:42:35.309921  165269 main.go:141] libmachine: (functional-207815) DBG | domain functional-207815 has defined IP address 192.168.39.239 and MAC address 52:54:00:3b:e8:6c in network mk-functional-207815
I0414 16:42:35.310054  165269 main.go:141] libmachine: (functional-207815) Calling .GetSSHPort
I0414 16:42:35.310224  165269 main.go:141] libmachine: (functional-207815) Calling .GetSSHKeyPath
I0414 16:42:35.310392  165269 main.go:141] libmachine: (functional-207815) Calling .GetSSHUsername
I0414 16:42:35.310523  165269 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/functional-207815/id_rsa Username:docker}
I0414 16:42:35.440627  165269 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 16:42:35.552553  165269 main.go:141] libmachine: Making call to close driver server
I0414 16:42:35.552569  165269 main.go:141] libmachine: (functional-207815) Calling .Close
I0414 16:42:35.552885  165269 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:42:35.552906  165269 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:42:35.552930  165269 main.go:141] libmachine: Making call to close driver server
I0414 16:42:35.552932  165269 main.go:141] libmachine: (functional-207815) DBG | Closing plugin on server side
I0414 16:42:35.552941  165269 main.go:141] libmachine: (functional-207815) Calling .Close
I0414 16:42:35.553174  165269 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:42:35.553193  165269 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)
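The YAML listing carries the same record shape as the JSON one. A sketch for decoding it, assuming gopkg.in/yaml.v3 as a dependency (the explicit struct tags matter because the keys are camelCase, which yaml.v3 would not match by default):

```go
// Sketch: decode captured `image ls --format yaml` output, e.g. saved to a
// file. Struct tags mirror the keys printed in this log.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

type listImage struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	data, err := os.ReadFile("images.yaml") // hypothetical capture file
	if err != nil {
		panic(err)
	}
	var images []listImage
	if err := yaml.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	fmt.Printf("%d images listed\n", len(images))
}
```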

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-207815 ssh pgrep buildkitd: exit status 1 (197.111152ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 image build -t localhost/my-image:functional-207815 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-207815 image build -t localhost/my-image:functional-207815 testdata/build --alsologtostderr: (3.182142102s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-207815 image build -t localhost/my-image:functional-207815 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9a7c0474d6c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-207815
--> 181d44c07fc
Successfully tagged localhost/my-image:functional-207815
181d44c07fcc10471739e878c91685673e743ce845043667d22f85808eaf1896
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-207815 image build -t localhost/my-image:functional-207815 testdata/build --alsologtostderr:
I0414 16:42:35.808193  165324 out.go:345] Setting OutFile to fd 1 ...
I0414 16:42:35.808401  165324 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 16:42:35.808418  165324 out.go:358] Setting ErrFile to fd 2...
I0414 16:42:35.808426  165324 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 16:42:35.808693  165324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
I0414 16:42:35.809565  165324 config.go:182] Loaded profile config "functional-207815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 16:42:35.810160  165324 config.go:182] Loaded profile config "functional-207815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 16:42:35.810497  165324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:42:35.810533  165324 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:42:35.825749  165324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40499
I0414 16:42:35.826301  165324 main.go:141] libmachine: () Calling .GetVersion
I0414 16:42:35.826896  165324 main.go:141] libmachine: Using API Version  1
I0414 16:42:35.826924  165324 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:42:35.827245  165324 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:42:35.827444  165324 main.go:141] libmachine: (functional-207815) Calling .GetState
I0414 16:42:35.829435  165324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:42:35.829483  165324 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:42:35.844243  165324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45737
I0414 16:42:35.844626  165324 main.go:141] libmachine: () Calling .GetVersion
I0414 16:42:35.845022  165324 main.go:141] libmachine: Using API Version  1
I0414 16:42:35.845043  165324 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:42:35.845381  165324 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:42:35.845542  165324 main.go:141] libmachine: (functional-207815) Calling .DriverName
I0414 16:42:35.845757  165324 ssh_runner.go:195] Run: systemctl --version
I0414 16:42:35.845795  165324 main.go:141] libmachine: (functional-207815) Calling .GetSSHHostname
I0414 16:42:35.848631  165324 main.go:141] libmachine: (functional-207815) DBG | domain functional-207815 has defined MAC address 52:54:00:3b:e8:6c in network mk-functional-207815
I0414 16:42:35.849058  165324 main.go:141] libmachine: (functional-207815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e8:6c", ip: ""} in network mk-functional-207815: {Iface:virbr1 ExpiryTime:2025-04-14 17:39:17 +0000 UTC Type:0 Mac:52:54:00:3b:e8:6c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:functional-207815 Clientid:01:52:54:00:3b:e8:6c}
I0414 16:42:35.849087  165324 main.go:141] libmachine: (functional-207815) DBG | domain functional-207815 has defined IP address 192.168.39.239 and MAC address 52:54:00:3b:e8:6c in network mk-functional-207815
I0414 16:42:35.849254  165324 main.go:141] libmachine: (functional-207815) Calling .GetSSHPort
I0414 16:42:35.849425  165324 main.go:141] libmachine: (functional-207815) Calling .GetSSHKeyPath
I0414 16:42:35.849569  165324 main.go:141] libmachine: (functional-207815) Calling .GetSSHUsername
I0414 16:42:35.849714  165324 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/functional-207815/id_rsa Username:docker}
I0414 16:42:35.960514  165324 build_images.go:161] Building image from path: /tmp/build.1119027187.tar
I0414 16:42:35.960588  165324 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0414 16:42:35.984263  165324 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1119027187.tar
I0414 16:42:35.996420  165324 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1119027187.tar: stat -c "%s %y" /var/lib/minikube/build/build.1119027187.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1119027187.tar': No such file or directory
I0414 16:42:35.996465  165324 ssh_runner.go:362] scp /tmp/build.1119027187.tar --> /var/lib/minikube/build/build.1119027187.tar (3072 bytes)
I0414 16:42:36.053092  165324 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1119027187
I0414 16:42:36.082357  165324 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1119027187 -xf /var/lib/minikube/build/build.1119027187.tar
I0414 16:42:36.115669  165324 crio.go:315] Building image: /var/lib/minikube/build/build.1119027187
I0414 16:42:36.115751  165324 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-207815 /var/lib/minikube/build/build.1119027187 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0414 16:42:38.902849  165324 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-207815 /var/lib/minikube/build/build.1119027187 --cgroup-manager=cgroupfs: (2.787065809s)
I0414 16:42:38.902932  165324 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1119027187
I0414 16:42:38.916344  165324 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1119027187.tar
I0414 16:42:38.931526  165324 build_images.go:217] Built localhost/my-image:functional-207815 from /tmp/build.1119027187.tar
I0414 16:42:38.931561  165324 build_images.go:133] succeeded building to: functional-207815
I0414 16:42:38.931568  165324 build_images.go:134] failed building to: 
I0414 16:42:38.931590  165324 main.go:141] libmachine: Making call to close driver server
I0414 16:42:38.931600  165324 main.go:141] libmachine: (functional-207815) Calling .Close
I0414 16:42:38.931892  165324 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:42:38.931908  165324 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:42:38.931918  165324 main.go:141] libmachine: Making call to close driver server
I0414 16:42:38.931926  165324 main.go:141] libmachine: (functional-207815) Calling .Close
I0414 16:42:38.932137  165324 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:42:38.932150  165324 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.67s)
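The build path is visible in the log above: the harness first probes for buildkitd over ssh (absent on this crio VM, hence the non-zero pgrep), after which minikube tars the local testdata/build context, copies it under /var/lib/minikube/build/, and runs `sudo podman build` inside the guest. A sketch of driving the same CLI step from Go, using only the command line shown in this log:

```go
// Sketch: invoke the same `image build` the test runs above. Binary path,
// profile, and tag are copied from this log; CombinedOutput mirrors the
// harness capturing stdout and stderr together.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-207815",
		"image", "build", "-t", "localhost/my-image:functional-207815",
		"testdata/build", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		log.Fatalf("image build: %v", err)
	}
}
```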

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-207815
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 image load --daemon kicbase/echo-server:functional-207815 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-207815 image load --daemon kicbase/echo-server:functional-207815 --alsologtostderr: (1.51734642s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.83s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "312.42732ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "49.39646ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "333.103074ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "44.912246ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-207815 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-207815 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-pzvs5" [6bcfd8c3-932b-430d-a02e-38edb0a37cc6] Pending
helpers_test.go:344: "hello-node-fcfd88b6f-pzvs5" [6bcfd8c3-932b-430d-a02e-38edb0a37cc6] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.007745219s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.20s)
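The readiness sequence above (Pending, then Running, healthy within ~9s) is the harness polling pods by label. The same gate can be expressed with `kubectl wait`; a sketch, where the 120s timeout is illustrative rather than the test's own 10m0s:

```go
// Sketch: equivalent readiness gate for the hello-node deployment above.
// Context name and label come from this log; the timeout is an assumption.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-207815",
		"wait", "--for=condition=ready", "pod",
		"-l", "app=hello-node", "--timeout=120s").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		log.Fatalf("kubectl wait: %v", err)
	}
}
```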

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 image load --daemon kicbase/echo-server:functional-207815 --alsologtostderr
functional_test.go:382: (dbg) Done: out/minikube-linux-amd64 -p functional-207815 image load --daemon kicbase/echo-server:functional-207815 --alsologtostderr: (2.20914053s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-207815
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 image load --daemon kicbase/echo-server:functional-207815 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 image save kicbase/echo-server:functional-207815 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 image rm kicbase/echo-server:functional-207815 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-207815
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 image save --daemon kicbase/echo-server:functional-207815 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-207815
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.78s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-207815 /tmp/TestFunctionalparallelMountCmdany-port1872392935/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1744648937357956047" to /tmp/TestFunctionalparallelMountCmdany-port1872392935/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1744648937357956047" to /tmp/TestFunctionalparallelMountCmdany-port1872392935/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1744648937357956047" to /tmp/TestFunctionalparallelMountCmdany-port1872392935/001/test-1744648937357956047
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-207815 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (239.969823ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0414 16:42:17.598288  156633 retry.go:31] will retry after 749.445976ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 14 16:42 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 14 16:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 14 16:42 test-1744648937357956047
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh cat /mount-9p/test-1744648937357956047
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-207815 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4e7330ce-2414-48df-ae92-69bd7e28db14] Pending
helpers_test.go:344: "busybox-mount" [4e7330ce-2414-48df-ae92-69bd7e28db14] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4e7330ce-2414-48df-ae92-69bd7e28db14] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4e7330ce-2414-48df-ae92-69bd7e28db14] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.002569522s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-207815 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-207815 /tmp/TestFunctionalparallelMountCmdany-port1872392935/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.09s)
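The retry.go:31 lines above show the probe pattern: the first `findmnt -T /mount-9p | grep 9p` races the mount daemon and fails, then the harness retries after a randomized delay. A self-contained sketch of that wait loop, with jitter bounds and deadline that are assumptions (the log only records the delays actually chosen):

```go
// Sketch: poll until the 9p mount is visible in the guest, mirroring the
// findmnt probe and "will retry after ..." backoff in the log above.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func waitForMount(profile, mountPath string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for {
		probe := fmt.Sprintf("findmnt -T %s | grep 9p", mountPath)
		err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", probe).Run()
		if err == nil {
			return nil // mount is up
		}
		if time.Now().After(stop) {
			return fmt.Errorf("%s never mounted: %v", mountPath, err)
		}
		// Randomized sub-second delay, roughly matching the retries logged above.
		time.Sleep(time.Duration(300+rand.Intn(700)) * time.Millisecond)
	}
}

func main() {
	if err := waitForMount("functional-207815", "/mount-9p", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```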

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 service list -o json
functional_test.go:1511: Took "344.838845ms" to run "out/minikube-linux-amd64 -p functional-207815 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.239:31290
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.239:31290
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-207815 /tmp/TestFunctionalparallelMountCmdspecific-port396967539/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-207815 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (245.637976ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0414 16:42:27.693655  156633 retry.go:31] will retry after 589.880779ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-207815 /tmp/TestFunctionalparallelMountCmdspecific-port396967539/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-207815 ssh "sudo umount -f /mount-9p": exit status 1 (222.442116ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-207815 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-207815 /tmp/TestFunctionalparallelMountCmdspecific-port396967539/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.13s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-207815 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1761014216/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-207815 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1761014216/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-207815 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1761014216/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-207815 ssh "findmnt -T" /mount1: exit status 1 (290.699985ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0414 16:42:29.866652  156633 retry.go:31] will retry after 340.931127ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-207815 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-207815 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-207815 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1761014216/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-207815 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1761014216/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-207815 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1761014216/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.40s)
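
Rather than unmounting /mount1 through /mount3 one by one, the cleanup path exercised here is mount --kill=true, which terminates every mount process belonging to the profile. A rough by-hand equivalent (source path is a placeholder):

	# start concurrent mounts in the background
	out/minikube-linux-amd64 mount -p functional-207815 /tmp/src:/mount1 &
	out/minikube-linux-amd64 mount -p functional-207815 /tmp/src:/mount2 &
	# kill all mount processes for the profile in one shot
	out/minikube-linux-amd64 mount -p functional-207815 --kill=true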

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-207815
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-207815
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-207815
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (189.93s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-304734 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0414 16:43:31.090637  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:43:58.801349  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-304734 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m9.248472505s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (189.93s)
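
The --ha flag is what makes this a multi-control-plane cluster; in this run it produces three control-plane nodes (ha-304734, -m02, -m03) sharing one apiserver endpoint, as the status logs further down show. The start/verify pair, reduced to its essential flags:

	# bring up an HA cluster and wait for all components
	out/minikube-linux-amd64 start -p ha-304734 --wait=true --memory=2200 --ha \
	  --driver=kvm2 --container-runtime=crio
	# every node should report Running/Configured
	out/minikube-linux-amd64 -p ha-304734 status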

TestMultiControlPlane/serial/DeployApp (6.82s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-304734 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-304734 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-304734 -- rollout status deployment/busybox: (4.76397054s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-304734 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-304734 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-304734 -- exec busybox-58667487b6-4s8lg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-304734 -- exec busybox-58667487b6-5dm9v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-304734 -- exec busybox-58667487b6-wfsf7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-304734 -- exec busybox-58667487b6-4s8lg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-304734 -- exec busybox-58667487b6-5dm9v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-304734 -- exec busybox-58667487b6-wfsf7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-304734 -- exec busybox-58667487b6-4s8lg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-304734 -- exec busybox-58667487b6-5dm9v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-304734 -- exec busybox-58667487b6-wfsf7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.82s)
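
The deployment check is apply, wait for rollout, then probe DNS from inside each replica. Condensed, with <busybox-pod> standing in for the generated pod names:

	kubectl apply -f testdata/ha/ha-pod-dns-test.yaml
	# block until all busybox replicas are available
	kubectl rollout status deployment/busybox
	# external and in-cluster lookups must both succeed from inside a pod
	kubectl exec <busybox-pod> -- nslookup kubernetes.io
	kubectl exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local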

TestMultiControlPlane/serial/PingHostFromPods (1.16s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-304734 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-304734 -- exec busybox-58667487b6-4s8lg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-304734 -- exec busybox-58667487b6-4s8lg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-304734 -- exec busybox-58667487b6-5dm9v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-304734 -- exec busybox-58667487b6-5dm9v -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-304734 -- exec busybox-58667487b6-wfsf7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-304734 -- exec busybox-58667487b6-wfsf7 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.16s)
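
The pipeline in the exec commands is doing the host-IP discovery: awk 'NR==5' keeps only the fifth line of nslookup's output and cut -d' ' -f3 takes its third space-separated field, which in busybox's nslookup layout is the address host.minikube.internal resolved to. Sketch (pod name is a placeholder):

	# resolve the host's IP from inside the pod, then ping it once
	HOST_IP=$(kubectl exec <busybox-pod> -- sh -c \
	  "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl exec <busybox-pod> -- sh -c "ping -c 1 $HOST_IP"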

TestMultiControlPlane/serial/AddWorkerNode (54.88s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-304734 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-304734 -v=7 --alsologtostderr: (54.038025378s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.88s)
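
node add with no role flag joins the new machine as a worker (it appears as ha-304734-m04 in later sections); contrast with the --control-plane variant near the end of this report:

	# add a worker node to the running profile, then re-check cluster health
	out/minikube-linux-amd64 node add -p ha-304734
	out/minikube-linux-amd64 -p ha-304734 status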

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-304734 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)
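
The jsonpath here dumps every node's label map on one line, which the test only needs to be present and well-formed. The same invocation, wrapped for readability:

	# print each node's labels, comma-separated
	kubectl --context ha-304734 get nodes \
	  -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"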

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

TestMultiControlPlane/serial/CopyFile (12.53s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 cp testdata/cp-test.txt ha-304734:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 cp ha-304734:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile613597114/001/cp-test_ha-304734.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 cp ha-304734:/home/docker/cp-test.txt ha-304734-m02:/home/docker/cp-test_ha-304734_ha-304734-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m02 "sudo cat /home/docker/cp-test_ha-304734_ha-304734-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 cp ha-304734:/home/docker/cp-test.txt ha-304734-m03:/home/docker/cp-test_ha-304734_ha-304734-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m03 "sudo cat /home/docker/cp-test_ha-304734_ha-304734-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 cp ha-304734:/home/docker/cp-test.txt ha-304734-m04:/home/docker/cp-test_ha-304734_ha-304734-m04.txt
E0414 16:47:08.946702  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:47:08.953126  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:47:08.964478  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734 "sudo cat /home/docker/cp-test.txt"
E0414 16:47:08.986389  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:47:09.027808  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:47:09.109286  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m04 "sudo cat /home/docker/cp-test_ha-304734_ha-304734-m04.txt"
E0414 16:47:09.271532  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 cp testdata/cp-test.txt ha-304734-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m02 "sudo cat /home/docker/cp-test.txt"
E0414 16:47:09.593452  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 cp ha-304734-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile613597114/001/cp-test_ha-304734-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 cp ha-304734-m02:/home/docker/cp-test.txt ha-304734:/home/docker/cp-test_ha-304734-m02_ha-304734.txt
E0414 16:47:10.235426  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734 "sudo cat /home/docker/cp-test_ha-304734-m02_ha-304734.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 cp ha-304734-m02:/home/docker/cp-test.txt ha-304734-m03:/home/docker/cp-test_ha-304734-m02_ha-304734-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m03 "sudo cat /home/docker/cp-test_ha-304734-m02_ha-304734-m03.txt"
E0414 16:47:11.517742  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 cp ha-304734-m02:/home/docker/cp-test.txt ha-304734-m04:/home/docker/cp-test_ha-304734-m02_ha-304734-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m04 "sudo cat /home/docker/cp-test_ha-304734-m02_ha-304734-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 cp testdata/cp-test.txt ha-304734-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 cp ha-304734-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile613597114/001/cp-test_ha-304734-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 cp ha-304734-m03:/home/docker/cp-test.txt ha-304734:/home/docker/cp-test_ha-304734-m03_ha-304734.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734 "sudo cat /home/docker/cp-test_ha-304734-m03_ha-304734.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 cp ha-304734-m03:/home/docker/cp-test.txt ha-304734-m02:/home/docker/cp-test_ha-304734-m03_ha-304734-m02.txt
E0414 16:47:14.079499  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m02 "sudo cat /home/docker/cp-test_ha-304734-m03_ha-304734-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 cp ha-304734-m03:/home/docker/cp-test.txt ha-304734-m04:/home/docker/cp-test_ha-304734-m03_ha-304734-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m04 "sudo cat /home/docker/cp-test_ha-304734-m03_ha-304734-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 cp testdata/cp-test.txt ha-304734-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 cp ha-304734-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile613597114/001/cp-test_ha-304734-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 cp ha-304734-m04:/home/docker/cp-test.txt ha-304734:/home/docker/cp-test_ha-304734-m04_ha-304734.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734 "sudo cat /home/docker/cp-test_ha-304734-m04_ha-304734.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 cp ha-304734-m04:/home/docker/cp-test.txt ha-304734-m02:/home/docker/cp-test_ha-304734-m04_ha-304734-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m02 "sudo cat /home/docker/cp-test_ha-304734-m04_ha-304734-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 cp ha-304734-m04:/home/docker/cp-test.txt ha-304734-m03:/home/docker/cp-test_ha-304734-m04_ha-304734-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m03 "sudo cat /home/docker/cp-test_ha-304734-m04_ha-304734-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.53s)
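
The matrix above exercises minikube cp in all three directions it supports, each copy verified by cat-ing the file back over ssh -n on the target node. One example per direction:

	# host -> node
	out/minikube-linux-amd64 -p ha-304734 cp testdata/cp-test.txt ha-304734:/home/docker/cp-test.txt
	# node -> host
	out/minikube-linux-amd64 -p ha-304734 cp ha-304734:/home/docker/cp-test.txt /tmp/cp-test.txt
	# node -> node
	out/minikube-linux-amd64 -p ha-304734 cp ha-304734:/home/docker/cp-test.txt ha-304734-m02:/home/docker/cp-test.txt
	# verify on the destination node
	out/minikube-linux-amd64 -p ha-304734 ssh -n ha-304734-m02 "sudo cat /home/docker/cp-test.txt"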

TestMultiControlPlane/serial/StopSecondaryNode (91.6s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 node stop m02 -v=7 --alsologtostderr
E0414 16:47:19.201510  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:47:29.443673  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:47:49.925281  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:48:30.887172  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:48:31.089893  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-304734 node stop m02 -v=7 --alsologtostderr: (1m30.978767029s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-304734 status -v=7 --alsologtostderr: exit status 7 (624.010472ms)

-- stdout --
	ha-304734
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-304734-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-304734-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-304734-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0414 16:48:49.176579  170136 out.go:345] Setting OutFile to fd 1 ...
	I0414 16:48:49.176690  170136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 16:48:49.176699  170136 out.go:358] Setting ErrFile to fd 2...
	I0414 16:48:49.176703  170136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 16:48:49.176881  170136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	I0414 16:48:49.177022  170136 out.go:352] Setting JSON to false
	I0414 16:48:49.177050  170136 mustload.go:65] Loading cluster: ha-304734
	I0414 16:48:49.177149  170136 notify.go:220] Checking for updates...
	I0414 16:48:49.177433  170136 config.go:182] Loaded profile config "ha-304734": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 16:48:49.177454  170136 status.go:174] checking status of ha-304734 ...
	I0414 16:48:49.177807  170136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:48:49.177884  170136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:48:49.193023  170136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44749
	I0414 16:48:49.193412  170136 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:48:49.193954  170136 main.go:141] libmachine: Using API Version  1
	I0414 16:48:49.193980  170136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:48:49.194299  170136 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:48:49.194465  170136 main.go:141] libmachine: (ha-304734) Calling .GetState
	I0414 16:48:49.196377  170136 status.go:371] ha-304734 host status = "Running" (err=<nil>)
	I0414 16:48:49.196393  170136 host.go:66] Checking if "ha-304734" exists ...
	I0414 16:48:49.196747  170136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:48:49.196788  170136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:48:49.211839  170136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37011
	I0414 16:48:49.212199  170136 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:48:49.212629  170136 main.go:141] libmachine: Using API Version  1
	I0414 16:48:49.212647  170136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:48:49.212950  170136 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:48:49.213106  170136 main.go:141] libmachine: (ha-304734) Calling .GetIP
	I0414 16:48:49.215622  170136 main.go:141] libmachine: (ha-304734) DBG | domain ha-304734 has defined MAC address 52:54:00:97:b7:53 in network mk-ha-304734
	I0414 16:48:49.216055  170136 main.go:141] libmachine: (ha-304734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:b7:53", ip: ""} in network mk-ha-304734: {Iface:virbr1 ExpiryTime:2025-04-14 17:43:06 +0000 UTC Type:0 Mac:52:54:00:97:b7:53 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-304734 Clientid:01:52:54:00:97:b7:53}
	I0414 16:48:49.216082  170136 main.go:141] libmachine: (ha-304734) DBG | domain ha-304734 has defined IP address 192.168.39.123 and MAC address 52:54:00:97:b7:53 in network mk-ha-304734
	I0414 16:48:49.216283  170136 host.go:66] Checking if "ha-304734" exists ...
	I0414 16:48:49.216628  170136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:48:49.216669  170136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:48:49.232969  170136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43325
	I0414 16:48:49.233340  170136 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:48:49.233774  170136 main.go:141] libmachine: Using API Version  1
	I0414 16:48:49.233795  170136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:48:49.234122  170136 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:48:49.234275  170136 main.go:141] libmachine: (ha-304734) Calling .DriverName
	I0414 16:48:49.234413  170136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 16:48:49.234433  170136 main.go:141] libmachine: (ha-304734) Calling .GetSSHHostname
	I0414 16:48:49.237265  170136 main.go:141] libmachine: (ha-304734) DBG | domain ha-304734 has defined MAC address 52:54:00:97:b7:53 in network mk-ha-304734
	I0414 16:48:49.237670  170136 main.go:141] libmachine: (ha-304734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:b7:53", ip: ""} in network mk-ha-304734: {Iface:virbr1 ExpiryTime:2025-04-14 17:43:06 +0000 UTC Type:0 Mac:52:54:00:97:b7:53 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:ha-304734 Clientid:01:52:54:00:97:b7:53}
	I0414 16:48:49.237706  170136 main.go:141] libmachine: (ha-304734) DBG | domain ha-304734 has defined IP address 192.168.39.123 and MAC address 52:54:00:97:b7:53 in network mk-ha-304734
	I0414 16:48:49.237885  170136 main.go:141] libmachine: (ha-304734) Calling .GetSSHPort
	I0414 16:48:49.238043  170136 main.go:141] libmachine: (ha-304734) Calling .GetSSHKeyPath
	I0414 16:48:49.238179  170136 main.go:141] libmachine: (ha-304734) Calling .GetSSHUsername
	I0414 16:48:49.238308  170136 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/ha-304734/id_rsa Username:docker}
	I0414 16:48:49.327337  170136 ssh_runner.go:195] Run: systemctl --version
	I0414 16:48:49.334919  170136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 16:48:49.350949  170136 kubeconfig.go:125] found "ha-304734" server: "https://192.168.39.254:8443"
	I0414 16:48:49.350982  170136 api_server.go:166] Checking apiserver status ...
	I0414 16:48:49.351021  170136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 16:48:49.367859  170136 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup
	W0414 16:48:49.377934  170136 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 16:48:49.377973  170136 ssh_runner.go:195] Run: ls
	I0414 16:48:49.382097  170136 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 16:48:49.387790  170136 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 16:48:49.387814  170136 status.go:463] ha-304734 apiserver status = Running (err=<nil>)
	I0414 16:48:49.387824  170136 status.go:176] ha-304734 status: &{Name:ha-304734 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 16:48:49.387856  170136 status.go:174] checking status of ha-304734-m02 ...
	I0414 16:48:49.388172  170136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:48:49.388210  170136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:48:49.403625  170136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33437
	I0414 16:48:49.404008  170136 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:48:49.404404  170136 main.go:141] libmachine: Using API Version  1
	I0414 16:48:49.404423  170136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:48:49.404747  170136 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:48:49.404899  170136 main.go:141] libmachine: (ha-304734-m02) Calling .GetState
	I0414 16:48:49.406403  170136 status.go:371] ha-304734-m02 host status = "Stopped" (err=<nil>)
	I0414 16:48:49.406418  170136 status.go:384] host is not running, skipping remaining checks
	I0414 16:48:49.406424  170136 status.go:176] ha-304734-m02 status: &{Name:ha-304734-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 16:48:49.406441  170136 status.go:174] checking status of ha-304734-m03 ...
	I0414 16:48:49.406800  170136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:48:49.406853  170136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:48:49.422293  170136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38455
	I0414 16:48:49.422626  170136 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:48:49.423003  170136 main.go:141] libmachine: Using API Version  1
	I0414 16:48:49.423025  170136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:48:49.423311  170136 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:48:49.423475  170136 main.go:141] libmachine: (ha-304734-m03) Calling .GetState
	I0414 16:48:49.424707  170136 status.go:371] ha-304734-m03 host status = "Running" (err=<nil>)
	I0414 16:48:49.424724  170136 host.go:66] Checking if "ha-304734-m03" exists ...
	I0414 16:48:49.424997  170136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:48:49.425029  170136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:48:49.438783  170136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33079
	I0414 16:48:49.439130  170136 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:48:49.439479  170136 main.go:141] libmachine: Using API Version  1
	I0414 16:48:49.439500  170136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:48:49.439771  170136 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:48:49.439953  170136 main.go:141] libmachine: (ha-304734-m03) Calling .GetIP
	I0414 16:48:49.442314  170136 main.go:141] libmachine: (ha-304734-m03) DBG | domain ha-304734-m03 has defined MAC address 52:54:00:67:33:5d in network mk-ha-304734
	I0414 16:48:49.442733  170136 main.go:141] libmachine: (ha-304734-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:33:5d", ip: ""} in network mk-ha-304734: {Iface:virbr1 ExpiryTime:2025-04-14 17:45:03 +0000 UTC Type:0 Mac:52:54:00:67:33:5d Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-304734-m03 Clientid:01:52:54:00:67:33:5d}
	I0414 16:48:49.442755  170136 main.go:141] libmachine: (ha-304734-m03) DBG | domain ha-304734-m03 has defined IP address 192.168.39.37 and MAC address 52:54:00:67:33:5d in network mk-ha-304734
	I0414 16:48:49.442914  170136 host.go:66] Checking if "ha-304734-m03" exists ...
	I0414 16:48:49.443199  170136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:48:49.443232  170136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:48:49.456849  170136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34387
	I0414 16:48:49.457198  170136 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:48:49.457801  170136 main.go:141] libmachine: Using API Version  1
	I0414 16:48:49.457821  170136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:48:49.458189  170136 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:48:49.458354  170136 main.go:141] libmachine: (ha-304734-m03) Calling .DriverName
	I0414 16:48:49.458529  170136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 16:48:49.458551  170136 main.go:141] libmachine: (ha-304734-m03) Calling .GetSSHHostname
	I0414 16:48:49.460556  170136 main.go:141] libmachine: (ha-304734-m03) DBG | domain ha-304734-m03 has defined MAC address 52:54:00:67:33:5d in network mk-ha-304734
	I0414 16:48:49.460964  170136 main.go:141] libmachine: (ha-304734-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:33:5d", ip: ""} in network mk-ha-304734: {Iface:virbr1 ExpiryTime:2025-04-14 17:45:03 +0000 UTC Type:0 Mac:52:54:00:67:33:5d Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-304734-m03 Clientid:01:52:54:00:67:33:5d}
	I0414 16:48:49.460996  170136 main.go:141] libmachine: (ha-304734-m03) DBG | domain ha-304734-m03 has defined IP address 192.168.39.37 and MAC address 52:54:00:67:33:5d in network mk-ha-304734
	I0414 16:48:49.461106  170136 main.go:141] libmachine: (ha-304734-m03) Calling .GetSSHPort
	I0414 16:48:49.461280  170136 main.go:141] libmachine: (ha-304734-m03) Calling .GetSSHKeyPath
	I0414 16:48:49.461439  170136 main.go:141] libmachine: (ha-304734-m03) Calling .GetSSHUsername
	I0414 16:48:49.461579  170136 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/ha-304734-m03/id_rsa Username:docker}
	I0414 16:48:49.545606  170136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 16:48:49.562874  170136 kubeconfig.go:125] found "ha-304734" server: "https://192.168.39.254:8443"
	I0414 16:48:49.562898  170136 api_server.go:166] Checking apiserver status ...
	I0414 16:48:49.562927  170136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 16:48:49.576947  170136 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	W0414 16:48:49.586195  170136 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 16:48:49.586252  170136 ssh_runner.go:195] Run: ls
	I0414 16:48:49.591011  170136 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 16:48:49.595266  170136 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 16:48:49.595285  170136 status.go:463] ha-304734-m03 apiserver status = Running (err=<nil>)
	I0414 16:48:49.595293  170136 status.go:176] ha-304734-m03 status: &{Name:ha-304734-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 16:48:49.595310  170136 status.go:174] checking status of ha-304734-m04 ...
	I0414 16:48:49.595610  170136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:48:49.595652  170136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:48:49.610939  170136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45975
	I0414 16:48:49.611339  170136 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:48:49.611740  170136 main.go:141] libmachine: Using API Version  1
	I0414 16:48:49.611760  170136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:48:49.612137  170136 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:48:49.612303  170136 main.go:141] libmachine: (ha-304734-m04) Calling .GetState
	I0414 16:48:49.613777  170136 status.go:371] ha-304734-m04 host status = "Running" (err=<nil>)
	I0414 16:48:49.613791  170136 host.go:66] Checking if "ha-304734-m04" exists ...
	I0414 16:48:49.614190  170136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:48:49.614234  170136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:48:49.629137  170136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40237
	I0414 16:48:49.629600  170136 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:48:49.630031  170136 main.go:141] libmachine: Using API Version  1
	I0414 16:48:49.630062  170136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:48:49.630404  170136 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:48:49.630594  170136 main.go:141] libmachine: (ha-304734-m04) Calling .GetIP
	I0414 16:48:49.633290  170136 main.go:141] libmachine: (ha-304734-m04) DBG | domain ha-304734-m04 has defined MAC address 52:54:00:95:55:f6 in network mk-ha-304734
	I0414 16:48:49.633674  170136 main.go:141] libmachine: (ha-304734-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:55:f6", ip: ""} in network mk-ha-304734: {Iface:virbr1 ExpiryTime:2025-04-14 17:46:25 +0000 UTC Type:0 Mac:52:54:00:95:55:f6 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-304734-m04 Clientid:01:52:54:00:95:55:f6}
	I0414 16:48:49.633711  170136 main.go:141] libmachine: (ha-304734-m04) DBG | domain ha-304734-m04 has defined IP address 192.168.39.230 and MAC address 52:54:00:95:55:f6 in network mk-ha-304734
	I0414 16:48:49.633823  170136 host.go:66] Checking if "ha-304734-m04" exists ...
	I0414 16:48:49.634128  170136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 16:48:49.634161  170136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 16:48:49.648424  170136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42633
	I0414 16:48:49.648775  170136 main.go:141] libmachine: () Calling .GetVersion
	I0414 16:48:49.649168  170136 main.go:141] libmachine: Using API Version  1
	I0414 16:48:49.649188  170136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 16:48:49.649473  170136 main.go:141] libmachine: () Calling .GetMachineName
	I0414 16:48:49.649650  170136 main.go:141] libmachine: (ha-304734-m04) Calling .DriverName
	I0414 16:48:49.649804  170136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 16:48:49.649821  170136 main.go:141] libmachine: (ha-304734-m04) Calling .GetSSHHostname
	I0414 16:48:49.652290  170136 main.go:141] libmachine: (ha-304734-m04) DBG | domain ha-304734-m04 has defined MAC address 52:54:00:95:55:f6 in network mk-ha-304734
	I0414 16:48:49.652687  170136 main.go:141] libmachine: (ha-304734-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:55:f6", ip: ""} in network mk-ha-304734: {Iface:virbr1 ExpiryTime:2025-04-14 17:46:25 +0000 UTC Type:0 Mac:52:54:00:95:55:f6 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-304734-m04 Clientid:01:52:54:00:95:55:f6}
	I0414 16:48:49.652704  170136 main.go:141] libmachine: (ha-304734-m04) DBG | domain ha-304734-m04 has defined IP address 192.168.39.230 and MAC address 52:54:00:95:55:f6 in network mk-ha-304734
	I0414 16:48:49.652879  170136 main.go:141] libmachine: (ha-304734-m04) Calling .GetSSHPort
	I0414 16:48:49.653021  170136 main.go:141] libmachine: (ha-304734-m04) Calling .GetSSHKeyPath
	I0414 16:48:49.653151  170136 main.go:141] libmachine: (ha-304734-m04) Calling .GetSSHUsername
	I0414 16:48:49.653292  170136 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/ha-304734-m04/id_rsa Username:docker}
	I0414 16:48:49.738421  170136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 16:48:49.754527  170136 status.go:176] ha-304734-m04 status: &{Name:ha-304734-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.60s)
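
Worth noting in the output above: with only m02 stopped, status still prints the healthy nodes but exits 7, so a script can tell a degraded-but-serving HA cluster apart from both a healthy one (exit 0) and a hard failure. For example:

	# stop one control-plane node, then probe overall status
	out/minikube-linux-amd64 -p ha-304734 node stop m02
	out/minikube-linux-amd64 -p ha-304734 status || echo "degraded: status exited $?"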

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

TestMultiControlPlane/serial/RestartSecondaryNode (50.79s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-304734 node start m02 -v=7 --alsologtostderr: (49.877846094s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (50.79s)
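
The stopped node rejoins with node start; the follow-up checks are the same status/get-nodes pair used throughout:

	# restart the stopped control-plane node and confirm it rejoined
	out/minikube-linux-amd64 -p ha-304734 node start m02
	out/minikube-linux-amd64 -p ha-304734 status
	kubectl get nodes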

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (437.74s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-304734 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-304734 -v=7 --alsologtostderr
E0414 16:49:52.808991  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:52:08.945926  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:52:36.651313  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
E0414 16:53:31.089647  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-304734 -v=7 --alsologtostderr: (4m34.212535876s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-304734 --wait=true -v=7 --alsologtostderr
E0414 16:54:54.162781  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-304734 --wait=true -v=7 --alsologtostderr: (2m43.430154911s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-304734
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (437.74s)
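
The property under test is that a full stop/start cycle preserves the node list rather than collapsing the profile back to a single node. As a by-hand check:

	# capture the node list, bounce the whole cluster, then compare
	out/minikube-linux-amd64 node list -p ha-304734
	out/minikube-linux-amd64 stop -p ha-304734
	out/minikube-linux-amd64 start -p ha-304734 --wait=true
	out/minikube-linux-amd64 node list -p ha-304734    # should match the first listing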

TestMultiControlPlane/serial/DeleteSecondaryNode (18.15s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 node delete m03 -v=7 --alsologtostderr
E0414 16:57:08.950094  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-304734 node delete m03 -v=7 --alsologtostderr: (17.448277002s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.15s)
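
The go-template in the final step prints one line per node carrying its Ready condition status, so "all True" means every surviving node is healthy after the delete. The same template, re-quoted for a shell:

	out/minikube-linux-amd64 -p ha-304734 node delete m03
	# one status line (True/False) per node's Ready condition
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'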

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

TestMultiControlPlane/serial/StopCluster (272.87s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 stop -v=7 --alsologtostderr
E0414 16:58:31.090082  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-304734 stop -v=7 --alsologtostderr: (4m32.761010729s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-304734 status -v=7 --alsologtostderr: exit status 7 (109.897174ms)

-- stdout --
	ha-304734
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-304734-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-304734-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0414 17:01:51.331949  174443 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:01:51.332064  174443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:01:51.332073  174443 out.go:358] Setting ErrFile to fd 2...
	I0414 17:01:51.332077  174443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:01:51.332251  174443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	I0414 17:01:51.332422  174443 out.go:352] Setting JSON to false
	I0414 17:01:51.332451  174443 mustload.go:65] Loading cluster: ha-304734
	I0414 17:01:51.332575  174443 notify.go:220] Checking for updates...
	I0414 17:01:51.332875  174443 config.go:182] Loaded profile config "ha-304734": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:01:51.332905  174443 status.go:174] checking status of ha-304734 ...
	I0414 17:01:51.333394  174443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:01:51.333446  174443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:01:51.360220  174443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44331
	I0414 17:01:51.360661  174443 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:01:51.361202  174443 main.go:141] libmachine: Using API Version  1
	I0414 17:01:51.361225  174443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:01:51.361590  174443 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:01:51.361785  174443 main.go:141] libmachine: (ha-304734) Calling .GetState
	I0414 17:01:51.363257  174443 status.go:371] ha-304734 host status = "Stopped" (err=<nil>)
	I0414 17:01:51.363274  174443 status.go:384] host is not running, skipping remaining checks
	I0414 17:01:51.363281  174443 status.go:176] ha-304734 status: &{Name:ha-304734 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 17:01:51.363320  174443 status.go:174] checking status of ha-304734-m02 ...
	I0414 17:01:51.363597  174443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:01:51.363632  174443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:01:51.377923  174443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34521
	I0414 17:01:51.378359  174443 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:01:51.378733  174443 main.go:141] libmachine: Using API Version  1
	I0414 17:01:51.378748  174443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:01:51.379038  174443 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:01:51.379216  174443 main.go:141] libmachine: (ha-304734-m02) Calling .GetState
	I0414 17:01:51.380701  174443 status.go:371] ha-304734-m02 host status = "Stopped" (err=<nil>)
	I0414 17:01:51.380716  174443 status.go:384] host is not running, skipping remaining checks
	I0414 17:01:51.380721  174443 status.go:176] ha-304734-m02 status: &{Name:ha-304734-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 17:01:51.380737  174443 status.go:174] checking status of ha-304734-m04 ...
	I0414 17:01:51.381003  174443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:01:51.381032  174443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:01:51.395063  174443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45879
	I0414 17:01:51.395417  174443 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:01:51.395814  174443 main.go:141] libmachine: Using API Version  1
	I0414 17:01:51.395831  174443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:01:51.396133  174443 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:01:51.396295  174443 main.go:141] libmachine: (ha-304734-m04) Calling .GetState
	I0414 17:01:51.397466  174443 status.go:371] ha-304734-m04 host status = "Stopped" (err=<nil>)
	I0414 17:01:51.397477  174443 status.go:384] host is not running, skipping remaining checks
	I0414 17:01:51.397482  174443 status.go:176] ha-304734-m04 status: &{Name:ha-304734-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.87s)

TestMultiControlPlane/serial/RestartCluster (120.02s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-304734 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0414 17:02:08.946043  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:03:31.090619  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:03:32.013664  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-304734 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m59.297648836s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (120.02s)
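
Note: the go-template in the final step prints each node's Ready condition. A standalone sketch of the same check, with the node name added for readability (the name field is my addition, not part of the test):

	kubectl get nodes -o go-template='{{range .items}}{{.metadata.name}}{{" "}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{end}}{{end}}{{"\n"}}{{end}}'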

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.6s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.60s)

TestMultiControlPlane/serial/AddSecondaryNode (75.18s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-304734 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-304734 --control-plane -v=7 --alsologtostderr: (1m14.340836245s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-304734 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.18s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

TestJSONOutput/start/Command (80.76s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-295153 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-295153 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m20.758603377s)
--- PASS: TestJSONOutput/start/Command (80.76s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-295153 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-295153 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.32s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-295153 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-295153 --output=json --user=testUser: (7.321529278s)
--- PASS: TestJSONOutput/stop/Command (7.32s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-113463 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-113463 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.219773ms)
-- stdout --
	{"specversion":"1.0","id":"b25d0d3f-1343-416e-a805-fe7b59319769","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-113463] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9cffd0ef-19a3-4dd8-bbbe-beb95a16e690","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20349"}}
	{"specversion":"1.0","id":"d01e7397-ef18-4962-96d9-fe66bbcc4c5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"30b59e04-eb86-4843-b996-66b70453b693","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig"}}
	{"specversion":"1.0","id":"81ae23f9-2e25-44f8-b8c2-bf6ddad2e1f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube"}}
	{"specversion":"1.0","id":"a5379d96-4b4d-421f-ace4-1beaef352758","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"aa6f2e30-0d07-4d8e-be96-a476b7b7caa2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9e1f4181-e574-45a7-97f7-28cd54d40c12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-113463" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-113463
--- PASS: TestErrorJSONOutput (0.19s)
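
Note: each JSON line above is a CloudEvents 1.0 envelope with the minikube payload under .data. A minimal sketch for surfacing only error events from a run, assuming jq is available (the profile name is illustrative):

	out/minikube-linux-amd64 start -p demo --output=json 2>/dev/null | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'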

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (84.24s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-093135 --driver=kvm2  --container-runtime=crio
E0414 17:07:08.950102  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-093135 --driver=kvm2  --container-runtime=crio: (38.513295872s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-107636 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-107636 --driver=kvm2  --container-runtime=crio: (42.967087654s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-093135
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-107636
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-107636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-107636
helpers_test.go:175: Cleaning up "first-093135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-093135
--- PASS: TestMinikubeProfile (84.24s)

TestMountStart/serial/StartWithMountFirst (27.5s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-650589 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0414 17:08:31.091279  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-650589 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.499398912s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.50s)
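
Note: the --mount-* flags above tune the built-in 9p host share (--mount-msize sets the 9p packet payload size, --mount-port the server port, --mount-uid/--mount-gid the ownership seen in the guest). The same share can also be created after boot with minikube mount; a sketch with an illustrative host path:

	out/minikube-linux-amd64 -p mount-start-1-650589 mount /srv/data:/minikube-host --port 46464 --msize 6543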

TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-650589 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-650589 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
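
Note: the check above greps the guest's mount table for a 9p entry; reading /proc/mounts directly is equivalent and avoids depending on the mount binary's output format:

	out/minikube-linux-amd64 -p mount-start-1-650589 ssh -- grep 9p /proc/mounts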

TestMountStart/serial/StartWithMountSecond (26.96s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-670969 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-670969 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.956141215s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.96s)

TestMountStart/serial/VerifyMountSecond (0.35s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-670969 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-670969 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

TestMountStart/serial/DeleteFirst (0.65s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-650589 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.65s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-670969 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-670969 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-670969
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-670969: (1.264680934s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (21.4s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-670969
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-670969: (20.403356229s)
--- PASS: TestMountStart/serial/RestartStopped (21.40s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-670969 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-670969 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (112.82s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-326457 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-326457 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.425164382s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.82s)

TestMultiNode/serial/DeployApp2Nodes (5.12s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-326457 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-326457 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-326457 -- rollout status deployment/busybox: (3.673611454s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-326457 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-326457 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-326457 -- exec busybox-58667487b6-bdmpk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-326457 -- exec busybox-58667487b6-jvngs -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-326457 -- exec busybox-58667487b6-bdmpk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-326457 -- exec busybox-58667487b6-jvngs -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-326457 -- exec busybox-58667487b6-bdmpk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-326457 -- exec busybox-58667487b6-jvngs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.12s)
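
Note: the two jsonpath queries above (pod IPs, then pod names) can be merged into one name-to-IP listing; a sketch using kubectl's jsonpath range syntax:

	out/minikube-linux-amd64 kubectl -p multinode-326457 -- get pods -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.podIP}{"\n"}{end}'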

TestMultiNode/serial/PingHostFrom2Pods (0.74s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-326457 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-326457 -- exec busybox-58667487b6-bdmpk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-326457 -- exec busybox-58667487b6-bdmpk -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-326457 -- exec busybox-58667487b6-jvngs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-326457 -- exec busybox-58667487b6-jvngs -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.74s)
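
Note: the pipeline above parses BusyBox nslookup output positionally: awk 'NR==5' keeps the fifth line, where the answer appears as "Address 1: <ip> <name>", and cut -d' ' -f3 takes the IP as the third space-separated field. A sketch that keys off the Name line instead of a fixed line number (still assuming the BusyBox output format, run inside the pod):

	nslookup host.minikube.internal | awk '/^Name/ { getline; print $3 }'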

TestMultiNode/serial/AddNode (45.01s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-326457 -v 3 --alsologtostderr
E0414 17:11:34.164361  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-326457 -v 3 --alsologtostderr: (44.454581072s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 status --alsologtostderr
E0414 17:12:08.946399  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiNode/serial/AddNode (45.01s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-326457 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.56s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.56s)

TestMultiNode/serial/CopyFile (6.98s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 cp testdata/cp-test.txt multinode-326457:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 ssh -n multinode-326457 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 cp multinode-326457:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile396573983/001/cp-test_multinode-326457.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 ssh -n multinode-326457 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 cp multinode-326457:/home/docker/cp-test.txt multinode-326457-m02:/home/docker/cp-test_multinode-326457_multinode-326457-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 ssh -n multinode-326457 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 ssh -n multinode-326457-m02 "sudo cat /home/docker/cp-test_multinode-326457_multinode-326457-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 cp multinode-326457:/home/docker/cp-test.txt multinode-326457-m03:/home/docker/cp-test_multinode-326457_multinode-326457-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 ssh -n multinode-326457 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 ssh -n multinode-326457-m03 "sudo cat /home/docker/cp-test_multinode-326457_multinode-326457-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 cp testdata/cp-test.txt multinode-326457-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 ssh -n multinode-326457-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 cp multinode-326457-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile396573983/001/cp-test_multinode-326457-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 ssh -n multinode-326457-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 cp multinode-326457-m02:/home/docker/cp-test.txt multinode-326457:/home/docker/cp-test_multinode-326457-m02_multinode-326457.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 ssh -n multinode-326457-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 ssh -n multinode-326457 "sudo cat /home/docker/cp-test_multinode-326457-m02_multinode-326457.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 cp multinode-326457-m02:/home/docker/cp-test.txt multinode-326457-m03:/home/docker/cp-test_multinode-326457-m02_multinode-326457-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 ssh -n multinode-326457-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 ssh -n multinode-326457-m03 "sudo cat /home/docker/cp-test_multinode-326457-m02_multinode-326457-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 cp testdata/cp-test.txt multinode-326457-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 ssh -n multinode-326457-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 cp multinode-326457-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile396573983/001/cp-test_multinode-326457-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 ssh -n multinode-326457-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 cp multinode-326457-m03:/home/docker/cp-test.txt multinode-326457:/home/docker/cp-test_multinode-326457-m03_multinode-326457.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 ssh -n multinode-326457-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 ssh -n multinode-326457 "sudo cat /home/docker/cp-test_multinode-326457-m03_multinode-326457.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 cp multinode-326457-m03:/home/docker/cp-test.txt multinode-326457-m02:/home/docker/cp-test_multinode-326457-m03_multinode-326457-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 ssh -n multinode-326457-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 ssh -n multinode-326457-m02 "sudo cat /home/docker/cp-test_multinode-326457-m03_multinode-326457-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.98s)
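
Note: the sequence above exercises all three directions supported by minikube cp; condensed (the /tmp destination is illustrative):

	out/minikube-linux-amd64 -p multinode-326457 cp testdata/cp-test.txt multinode-326457:/home/docker/cp-test.txt                            # host -> node
	out/minikube-linux-amd64 -p multinode-326457 cp multinode-326457:/home/docker/cp-test.txt /tmp/cp-test.txt                                # node -> host
	out/minikube-linux-amd64 -p multinode-326457 cp multinode-326457:/home/docker/cp-test.txt multinode-326457-m02:/home/docker/cp-test.txt   # node -> node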

TestMultiNode/serial/StopNode (2.31s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-326457 node stop m03: (1.500174501s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-326457 status: exit status 7 (403.076174ms)
-- stdout --
	multinode-326457
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-326457-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-326457-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-326457 status --alsologtostderr: exit status 7 (410.691906ms)
-- stdout --
	multinode-326457
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-326457-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-326457-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0414 17:12:18.790864  182208 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:12:18.790965  182208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:12:18.790973  182208 out.go:358] Setting ErrFile to fd 2...
	I0414 17:12:18.790978  182208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:12:18.791145  182208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	I0414 17:12:18.791301  182208 out.go:352] Setting JSON to false
	I0414 17:12:18.791332  182208 mustload.go:65] Loading cluster: multinode-326457
	I0414 17:12:18.791377  182208 notify.go:220] Checking for updates...
	I0414 17:12:18.791937  182208 config.go:182] Loaded profile config "multinode-326457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:12:18.791966  182208 status.go:174] checking status of multinode-326457 ...
	I0414 17:12:18.792446  182208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:12:18.792496  182208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:12:18.808357  182208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37653
	I0414 17:12:18.808764  182208 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:12:18.809450  182208 main.go:141] libmachine: Using API Version  1
	I0414 17:12:18.809485  182208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:12:18.809800  182208 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:12:18.810150  182208 main.go:141] libmachine: (multinode-326457) Calling .GetState
	I0414 17:12:18.811665  182208 status.go:371] multinode-326457 host status = "Running" (err=<nil>)
	I0414 17:12:18.811682  182208 host.go:66] Checking if "multinode-326457" exists ...
	I0414 17:12:18.811961  182208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:12:18.812001  182208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:12:18.826578  182208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42225
	I0414 17:12:18.827015  182208 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:12:18.827408  182208 main.go:141] libmachine: Using API Version  1
	I0414 17:12:18.827425  182208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:12:18.827750  182208 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:12:18.827897  182208 main.go:141] libmachine: (multinode-326457) Calling .GetIP
	I0414 17:12:18.830271  182208 main.go:141] libmachine: (multinode-326457) DBG | domain multinode-326457 has defined MAC address 52:54:00:9f:6a:47 in network mk-multinode-326457
	I0414 17:12:18.830601  182208 main.go:141] libmachine: (multinode-326457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:6a:47", ip: ""} in network mk-multinode-326457: {Iface:virbr1 ExpiryTime:2025-04-14 18:09:40 +0000 UTC Type:0 Mac:52:54:00:9f:6a:47 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:multinode-326457 Clientid:01:52:54:00:9f:6a:47}
	I0414 17:12:18.830627  182208 main.go:141] libmachine: (multinode-326457) DBG | domain multinode-326457 has defined IP address 192.168.39.76 and MAC address 52:54:00:9f:6a:47 in network mk-multinode-326457
	I0414 17:12:18.830723  182208 host.go:66] Checking if "multinode-326457" exists ...
	I0414 17:12:18.831015  182208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:12:18.831053  182208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:12:18.845165  182208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36273
	I0414 17:12:18.845586  182208 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:12:18.846091  182208 main.go:141] libmachine: Using API Version  1
	I0414 17:12:18.846118  182208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:12:18.846421  182208 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:12:18.846600  182208 main.go:141] libmachine: (multinode-326457) Calling .DriverName
	I0414 17:12:18.846795  182208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 17:12:18.846820  182208 main.go:141] libmachine: (multinode-326457) Calling .GetSSHHostname
	I0414 17:12:18.849441  182208 main.go:141] libmachine: (multinode-326457) DBG | domain multinode-326457 has defined MAC address 52:54:00:9f:6a:47 in network mk-multinode-326457
	I0414 17:12:18.849861  182208 main.go:141] libmachine: (multinode-326457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:6a:47", ip: ""} in network mk-multinode-326457: {Iface:virbr1 ExpiryTime:2025-04-14 18:09:40 +0000 UTC Type:0 Mac:52:54:00:9f:6a:47 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:multinode-326457 Clientid:01:52:54:00:9f:6a:47}
	I0414 17:12:18.849883  182208 main.go:141] libmachine: (multinode-326457) DBG | domain multinode-326457 has defined IP address 192.168.39.76 and MAC address 52:54:00:9f:6a:47 in network mk-multinode-326457
	I0414 17:12:18.850021  182208 main.go:141] libmachine: (multinode-326457) Calling .GetSSHPort
	I0414 17:12:18.850197  182208 main.go:141] libmachine: (multinode-326457) Calling .GetSSHKeyPath
	I0414 17:12:18.850382  182208 main.go:141] libmachine: (multinode-326457) Calling .GetSSHUsername
	I0414 17:12:18.850544  182208 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/multinode-326457/id_rsa Username:docker}
	I0414 17:12:18.929601  182208 ssh_runner.go:195] Run: systemctl --version
	I0414 17:12:18.936845  182208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:12:18.953701  182208 kubeconfig.go:125] found "multinode-326457" server: "https://192.168.39.76:8443"
	I0414 17:12:18.953735  182208 api_server.go:166] Checking apiserver status ...
	I0414 17:12:18.953775  182208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 17:12:18.969565  182208 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1116/cgroup
	W0414 17:12:18.980535  182208 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1116/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 17:12:18.980584  182208 ssh_runner.go:195] Run: ls
	I0414 17:12:18.985173  182208 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0414 17:12:18.989704  182208 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0414 17:12:18.989728  182208 status.go:463] multinode-326457 apiserver status = Running (err=<nil>)
	I0414 17:12:18.989740  182208 status.go:176] multinode-326457 status: &{Name:multinode-326457 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 17:12:18.989756  182208 status.go:174] checking status of multinode-326457-m02 ...
	I0414 17:12:18.990132  182208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:12:18.990171  182208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:12:19.005115  182208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
	I0414 17:12:19.005559  182208 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:12:19.006046  182208 main.go:141] libmachine: Using API Version  1
	I0414 17:12:19.006068  182208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:12:19.006368  182208 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:12:19.006533  182208 main.go:141] libmachine: (multinode-326457-m02) Calling .GetState
	I0414 17:12:19.007909  182208 status.go:371] multinode-326457-m02 host status = "Running" (err=<nil>)
	I0414 17:12:19.007923  182208 host.go:66] Checking if "multinode-326457-m02" exists ...
	I0414 17:12:19.008211  182208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:12:19.008242  182208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:12:19.022574  182208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34565
	I0414 17:12:19.022917  182208 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:12:19.023317  182208 main.go:141] libmachine: Using API Version  1
	I0414 17:12:19.023337  182208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:12:19.023592  182208 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:12:19.023730  182208 main.go:141] libmachine: (multinode-326457-m02) Calling .GetIP
	I0414 17:12:19.026147  182208 main.go:141] libmachine: (multinode-326457-m02) DBG | domain multinode-326457-m02 has defined MAC address 52:54:00:71:dd:a1 in network mk-multinode-326457
	I0414 17:12:19.026583  182208 main.go:141] libmachine: (multinode-326457-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:dd:a1", ip: ""} in network mk-multinode-326457: {Iface:virbr1 ExpiryTime:2025-04-14 18:10:44 +0000 UTC Type:0 Mac:52:54:00:71:dd:a1 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:multinode-326457-m02 Clientid:01:52:54:00:71:dd:a1}
	I0414 17:12:19.026606  182208 main.go:141] libmachine: (multinode-326457-m02) DBG | domain multinode-326457-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:71:dd:a1 in network mk-multinode-326457
	I0414 17:12:19.026752  182208 host.go:66] Checking if "multinode-326457-m02" exists ...
	I0414 17:12:19.027022  182208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:12:19.027056  182208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:12:19.041495  182208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39001
	I0414 17:12:19.041879  182208 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:12:19.042201  182208 main.go:141] libmachine: Using API Version  1
	I0414 17:12:19.042218  182208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:12:19.042545  182208 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:12:19.042682  182208 main.go:141] libmachine: (multinode-326457-m02) Calling .DriverName
	I0414 17:12:19.042824  182208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 17:12:19.042846  182208 main.go:141] libmachine: (multinode-326457-m02) Calling .GetSSHHostname
	I0414 17:12:19.045076  182208 main.go:141] libmachine: (multinode-326457-m02) DBG | domain multinode-326457-m02 has defined MAC address 52:54:00:71:dd:a1 in network mk-multinode-326457
	I0414 17:12:19.045471  182208 main.go:141] libmachine: (multinode-326457-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:dd:a1", ip: ""} in network mk-multinode-326457: {Iface:virbr1 ExpiryTime:2025-04-14 18:10:44 +0000 UTC Type:0 Mac:52:54:00:71:dd:a1 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:multinode-326457-m02 Clientid:01:52:54:00:71:dd:a1}
	I0414 17:12:19.045509  182208 main.go:141] libmachine: (multinode-326457-m02) DBG | domain multinode-326457-m02 has defined IP address 192.168.39.53 and MAC address 52:54:00:71:dd:a1 in network mk-multinode-326457
	I0414 17:12:19.045604  182208 main.go:141] libmachine: (multinode-326457-m02) Calling .GetSSHPort
	I0414 17:12:19.045757  182208 main.go:141] libmachine: (multinode-326457-m02) Calling .GetSSHKeyPath
	I0414 17:12:19.045904  182208 main.go:141] libmachine: (multinode-326457-m02) Calling .GetSSHUsername
	I0414 17:12:19.046009  182208 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/multinode-326457-m02/id_rsa Username:docker}
	I0414 17:12:19.124554  182208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 17:12:19.138235  182208 status.go:176] multinode-326457-m02 status: &{Name:multinode-326457-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0414 17:12:19.138269  182208 status.go:174] checking status of multinode-326457-m03 ...
	I0414 17:12:19.138621  182208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:12:19.138671  182208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:12:19.154189  182208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33207
	I0414 17:12:19.154654  182208 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:12:19.155193  182208 main.go:141] libmachine: Using API Version  1
	I0414 17:12:19.155215  182208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:12:19.155503  182208 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:12:19.155654  182208 main.go:141] libmachine: (multinode-326457-m03) Calling .GetState
	I0414 17:12:19.157063  182208 status.go:371] multinode-326457-m03 host status = "Stopped" (err=<nil>)
	I0414 17:12:19.157083  182208 status.go:384] host is not running, skipping remaining checks
	I0414 17:12:19.157088  182208 status.go:176] multinode-326457-m03 status: &{Name:multinode-326457-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)
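
Note: minikube status reports node state through its exit code rather than failing outright, so with only worker m03 stopped both invocations above exit 7 even though the control plane stayed healthy. A quick way to observe this:

	out/minikube-linux-amd64 -p multinode-326457 status; echo "status exit code: $?"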

TestMultiNode/serial/StartAfterStop (36.49s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-326457 node start m03 -v=7 --alsologtostderr: (35.883147714s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.49s)

TestMultiNode/serial/RestartKeepsNodes (338.57s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-326457
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-326457
E0414 17:13:31.089988  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-326457: (3m3.056175587s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-326457 --wait=true -v=8 --alsologtostderr
E0414 17:17:08.946547  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:18:31.090023  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-326457 --wait=true -v=8 --alsologtostderr: (2m35.419540647s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-326457
--- PASS: TestMultiNode/serial/RestartKeepsNodes (338.57s)

TestMultiNode/serial/DeleteNode (2.78s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-326457 node delete m03: (2.239959539s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.78s)

TestMultiNode/serial/StopMultiNode (182.02s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 stop
E0414 17:20:12.015508  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-326457 stop: (3m1.853764193s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-326457 status: exit status 7 (81.264027ms)
-- stdout --
	multinode-326457
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-326457-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-326457 status --alsologtostderr: exit status 7 (82.653927ms)
-- stdout --
	multinode-326457
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-326457-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0414 17:21:38.985348  185657 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:21:38.985476  185657 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:21:38.985488  185657 out.go:358] Setting ErrFile to fd 2...
	I0414 17:21:38.985496  185657 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:21:38.985690  185657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	I0414 17:21:38.985931  185657 out.go:352] Setting JSON to false
	I0414 17:21:38.985973  185657 mustload.go:65] Loading cluster: multinode-326457
	I0414 17:21:38.986083  185657 notify.go:220] Checking for updates...
	I0414 17:21:38.986496  185657 config.go:182] Loaded profile config "multinode-326457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:21:38.986522  185657 status.go:174] checking status of multinode-326457 ...
	I0414 17:21:38.987221  185657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:21:38.987283  185657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:21:39.002281  185657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39481
	I0414 17:21:39.002661  185657 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:21:39.003259  185657 main.go:141] libmachine: Using API Version  1
	I0414 17:21:39.003294  185657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:21:39.003621  185657 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:21:39.003825  185657 main.go:141] libmachine: (multinode-326457) Calling .GetState
	I0414 17:21:39.005352  185657 status.go:371] multinode-326457 host status = "Stopped" (err=<nil>)
	I0414 17:21:39.005364  185657 status.go:384] host is not running, skipping remaining checks
	I0414 17:21:39.005369  185657 status.go:176] multinode-326457 status: &{Name:multinode-326457 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 17:21:39.005389  185657 status.go:174] checking status of multinode-326457-m02 ...
	I0414 17:21:39.005655  185657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 17:21:39.005712  185657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 17:21:39.020155  185657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42459
	I0414 17:21:39.020514  185657 main.go:141] libmachine: () Calling .GetVersion
	I0414 17:21:39.020959  185657 main.go:141] libmachine: Using API Version  1
	I0414 17:21:39.020984  185657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 17:21:39.021310  185657 main.go:141] libmachine: () Calling .GetMachineName
	I0414 17:21:39.021470  185657 main.go:141] libmachine: (multinode-326457-m02) Calling .GetState
	I0414 17:21:39.022775  185657 status.go:371] multinode-326457-m02 host status = "Stopped" (err=<nil>)
	I0414 17:21:39.022789  185657 status.go:384] host is not running, skipping remaining checks
	I0414 17:21:39.022794  185657 status.go:176] multinode-326457-m02 status: &{Name:multinode-326457-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.02s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (114.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-326457 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0414 17:22:08.946579  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:23:31.089810  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-326457 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.105968361s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-326457 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (114.62s)
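Note: the readiness assertion above feeds `kubectl get nodes` through a Go text/template that prints each node's Ready condition. A standalone sketch of what that template evaluates, run against hand-built sample data instead of live kubectl output (the template string is copied from the command above, minus shell quoting):

package main

import (
	"os"
	"text/template"
)

func main() {
	// The template the test passes to kubectl via -o go-template.
	tmpl := template.Must(template.New("ready").Parse(
		`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))

	// Stand-in for the decoded `kubectl get nodes` JSON: two Ready nodes.
	nodes := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{"status": map[string]interface{}{
				"conditions": []interface{}{
					map[string]interface{}{"type": "Ready", "status": "True"},
				},
			}},
			map[string]interface{}{"status": map[string]interface{}{
				"conditions": []interface{}{
					map[string]interface{}{"type": "Ready", "status": "True"},
				},
			}},
		},
	}
	_ = tmpl.Execute(os.Stdout, nodes) // prints " True" once per node
}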

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-326457
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-326457-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-326457-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (57.261406ms)

                                                
                                                
-- stdout --
	* [multinode-326457-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20349
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-326457-m02' is duplicated with machine name 'multinode-326457-m02' in profile 'multinode-326457'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-326457-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-326457-m03 --driver=kvm2  --container-runtime=crio: (42.523596125s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-326457
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-326457: exit status 80 (201.625078ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-326457 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-326457-m03 already exists in multinode-326457-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-326457-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.62s)
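Note: both non-zero exits above enforce the same invariant: a new profile name may collide either with an existing profile or with a machine inside one (multinode machines are named <profile>-m02, -m03, ...). A hypothetical sketch of that check, not minikube's actual types or code:

package main

import "fmt"

// nameConflicts reports whether candidate clashes with an existing profile
// name or with any machine name owned by a profile.
func nameConflicts(candidate string, machinesByProfile map[string][]string) bool {
	for profile, machines := range machinesByProfile {
		if profile == candidate {
			return true
		}
		for _, machine := range machines {
			if machine == candidate {
				return true
			}
		}
	}
	return false
}

func main() {
	existing := map[string][]string{
		"multinode-326457": {"multinode-326457", "multinode-326457-m02"},
	}
	fmt.Println(nameConflicts("multinode-326457-m02", existing)) // true: the MK_USAGE case above
	fmt.Println(nameConflicts("multinode-326457-m03", existing)) // false: that start succeeds
}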

                                                
                                    
TestScheduledStopUnix (114.95s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-664908 --memory=2048 --driver=kvm2  --container-runtime=crio
E0414 17:27:08.952384  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-664908 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.408457932s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-664908 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-664908 -n scheduled-stop-664908
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-664908 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0414 17:27:45.049129  156633 retry.go:31] will retry after 55.938µs: open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/scheduled-stop-664908/pid: no such file or directory
I0414 17:27:45.050280  156633 retry.go:31] will retry after 135.401µs: open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/scheduled-stop-664908/pid: no such file or directory
I0414 17:27:45.051423  156633 retry.go:31] will retry after 242.654µs: open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/scheduled-stop-664908/pid: no such file or directory
I0414 17:27:45.052554  156633 retry.go:31] will retry after 445.268µs: open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/scheduled-stop-664908/pid: no such file or directory
I0414 17:27:45.053695  156633 retry.go:31] will retry after 499.122µs: open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/scheduled-stop-664908/pid: no such file or directory
I0414 17:27:45.054838  156633 retry.go:31] will retry after 495.785µs: open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/scheduled-stop-664908/pid: no such file or directory
I0414 17:27:45.055979  156633 retry.go:31] will retry after 1.380254ms: open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/scheduled-stop-664908/pid: no such file or directory
I0414 17:27:45.058181  156633 retry.go:31] will retry after 2.020266ms: open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/scheduled-stop-664908/pid: no such file or directory
I0414 17:27:45.060310  156633 retry.go:31] will retry after 3.824937ms: open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/scheduled-stop-664908/pid: no such file or directory
I0414 17:27:45.064505  156633 retry.go:31] will retry after 2.313779ms: open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/scheduled-stop-664908/pid: no such file or directory
I0414 17:27:45.067698  156633 retry.go:31] will retry after 3.499603ms: open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/scheduled-stop-664908/pid: no such file or directory
I0414 17:27:45.071962  156633 retry.go:31] will retry after 7.501772ms: open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/scheduled-stop-664908/pid: no such file or directory
I0414 17:27:45.080181  156633 retry.go:31] will retry after 19.077444ms: open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/scheduled-stop-664908/pid: no such file or directory
I0414 17:27:45.099363  156633 retry.go:31] will retry after 10.868136ms: open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/scheduled-stop-664908/pid: no such file or directory
I0414 17:27:45.110622  156633 retry.go:31] will retry after 20.521927ms: open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/scheduled-stop-664908/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-664908 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-664908 -n scheduled-stop-664908
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-664908
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-664908 --schedule 15s
E0414 17:28:14.168710  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0414 17:28:31.091200  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-664908
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-664908: exit status 7 (61.517467ms)

                                                
                                                
-- stdout --
	scheduled-stop-664908
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-664908 -n scheduled-stop-664908
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-664908 -n scheduled-stop-664908: exit status 7 (64.827602ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-664908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-664908
--- PASS: TestScheduledStopUnix (114.95s)
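Note: the retry.go burst above is the test polling for the scheduled-stop pid file with growing, jittered delays. A minimal sketch of that polling pattern, assuming a plain doubling backoff (minikube's helper also adds jitter, which is why the logged delays are not exact doubles) and an illustrative path:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists, doubling the delay between attempts.
func waitForFile(path string, attempts int) error {
	delay := 50 * time.Microsecond
	for i := 0; i < attempts; i++ {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v\n", delay)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	_ = waitForFile("/tmp/example-profile/pid", 10) // hypothetical path
}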

                                                
                                    
TestRunningBinaryUpgrade (225.12s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2737292239 start -p running-upgrade-912198 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2737292239 start -p running-upgrade-912198 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m8.328396212s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-912198 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-912198 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m34.933681501s)
helpers_test.go:175: Cleaning up "running-upgrade-912198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-912198
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-912198: (1.231561557s)
--- PASS: TestRunningBinaryUpgrade (225.12s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-900958 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-900958 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (70.070718ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-900958] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20349
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
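Note: exit status 14 is minikube's usage-error code; the rule exercised here is simply that --kubernetes-version and --no-kubernetes are mutually exclusive. Sketched as a hypothetical validator, not minikube's source:

package main

import "fmt"

// validateStartFlags mirrors the MK_USAGE rule above: pinning a Kubernetes
// version is meaningless on a cluster started without Kubernetes.
func validateStartFlags(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return fmt.Errorf("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	fmt.Println(validateStartFlags(true, "1.20")) // the failing combination above
	fmt.Println(validateStartFlags(true, ""))     // what the later no-kubernetes starts use
}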

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (94.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-900958 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-900958 --driver=kvm2  --container-runtime=crio: (1m34.321346567s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-900958 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.56s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (72.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-900958 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-900958 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m11.500014749s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-900958 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-900958 status -o json: exit status 2 (261.45426ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-900958","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-900958
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (72.62s)
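Note: the JSON line above is the whole contract this step verifies: host still running, kubelet and apiserver stopped. A small sketch of consuming `minikube status -o json`, with the struct shape read directly off that line (the type name is illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus mirrors the keys in the status line above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-900958","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
}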

                                                
                                    
TestNoKubernetes/serial/Start (59.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-900958 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-900958 --no-kubernetes --driver=kvm2  --container-runtime=crio: (59.492685871s)
--- PASS: TestNoKubernetes/serial/Start (59.49s)

                                                
                                    
TestPause/serial/Start (86.25s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-439119 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-439119 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m26.254312843s)
--- PASS: TestPause/serial/Start (86.25s)

                                                
                                    
TestNetworkPlugins/group/false (3.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-993774 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-993774 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (91.82527ms)

                                                
                                                
-- stdout --
	* [false-993774] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20349
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 17:32:42.333247  193092 out.go:345] Setting OutFile to fd 1 ...
	I0414 17:32:42.333346  193092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:32:42.333354  193092 out.go:358] Setting ErrFile to fd 2...
	I0414 17:32:42.333358  193092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 17:32:42.333538  193092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
	I0414 17:32:42.334090  193092 out.go:352] Setting JSON to false
	I0414 17:32:42.334914  193092 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8060,"bootTime":1744643902,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 17:32:42.334966  193092 start.go:139] virtualization: kvm guest
	I0414 17:32:42.336605  193092 out.go:177] * [false-993774] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 17:32:42.337591  193092 notify.go:220] Checking for updates...
	I0414 17:32:42.337628  193092 out.go:177]   - MINIKUBE_LOCATION=20349
	I0414 17:32:42.338733  193092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 17:32:42.339951  193092 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
	I0414 17:32:42.340922  193092 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
	I0414 17:32:42.341940  193092 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 17:32:42.343018  193092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 17:32:42.344612  193092 config.go:182] Loaded profile config "NoKubernetes-900958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0414 17:32:42.344760  193092 config.go:182] Loaded profile config "cert-expiration-560919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:32:42.344915  193092 config.go:182] Loaded profile config "pause-439119": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 17:32:42.345026  193092 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 17:32:42.377931  193092 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 17:32:42.378964  193092 start.go:297] selected driver: kvm2
	I0414 17:32:42.378978  193092 start.go:901] validating driver "kvm2" against <nil>
	I0414 17:32:42.378988  193092 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 17:32:42.380884  193092 out.go:201] 
	W0414 17:32:42.381861  193092 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0414 17:32:42.382867  193092 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-993774 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-993774

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-993774

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-993774

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-993774

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-993774

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-993774

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-993774

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-993774

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-993774

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-993774

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-993774

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-993774" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-993774" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 17:31:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.83:8443
  name: cert-expiration-560919
contexts:
- context:
    cluster: cert-expiration-560919
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 17:31:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-560919
  name: cert-expiration-560919
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-560919
  user:
    client-certificate: /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/client.crt
    client-key: /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-993774

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-993774"

                                                
                                                
----------------------- debugLogs end: false-993774 [took: 2.97568665s] --------------------------------
helpers_test.go:175: Cleaning up "false-993774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-993774
--- PASS: TestNetworkPlugins/group/false (3.21s)
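Note: this group never starts a cluster; it only asserts that --cni=false is rejected up front, because cri-o needs a CNI plugin for pod networking. A hypothetical sketch of that gate (the real check lives in minikube's start validation):

package main

import "fmt"

// validateCNI rejects the combination the MK_USAGE error above reports.
func validateCNI(containerRuntime, cni string) error {
	if containerRuntime == "crio" && cni == "false" {
		return fmt.Errorf("the %q container runtime requires CNI", containerRuntime)
	}
	return nil
}

func main() {
	fmt.Println(validateCNI("crio", "false"))   // rejected, as above
	fmt.Println(validateCNI("crio", "kindnet")) // fine: see the kindnet group below
}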

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-900958 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-900958 "sudo systemctl is-active --quiet service kubelet": exit status 1 (214.441939ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
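Note: this check passes because the ssh'd command fails: `systemctl is-active --quiet` exits 3 for an inactive unit, which ssh surfaces as "Process exited with status 3". A local sketch of the same probe:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the test runs over ssh; --quiet suppresses output,
	// so only the exit code matters (0 = active, nonzero = not).
	cmd := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet not active, as the test expects:", err)
		return
	}
	fmt.Println("kubelet is active")
}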

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.04s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-900958
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-900958: (1.448624836s)
--- PASS: TestNoKubernetes/serial/Stop (1.45s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (44.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-900958 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-900958 --driver=kvm2  --container-runtime=crio: (44.027820092s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (44.03s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-900958 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-900958 "sudo systemctl is-active --quiet service kubelet": exit status 1 (223.176299ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
E0414 17:33:31.089478  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (128.69s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2847913874 start -p stopped-upgrade-328583 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2847913874 start -p stopped-upgrade-328583 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m18.143466203s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2847913874 -p stopped-upgrade-328583 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2847913874 -p stopped-upgrade-328583 stop: (2.147121537s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-328583 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-328583 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.401963964s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (128.69s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (56.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-993774 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-993774 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (56.146439608s)
--- PASS: TestNetworkPlugins/group/auto/Start (56.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (72.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-993774 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-993774 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m12.163807982s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.16s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-328583
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (100.40s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-993774 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-993774 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m40.399011518s)
--- PASS: TestNetworkPlugins/group/calico/Start (100.40s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-993774 "pgrep -a kubelet"
I0414 17:36:11.810994  156633 config.go:182] Loaded profile config "auto-993774": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-993774 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context auto-993774 replace --force -f testdata/netcat-deployment.yaml: (2.08485266s)
I0414 17:36:14.405612  156633 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-fknxj" [d1e0a030-deca-41ec-aa02-5285168b70aa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-fknxj" [d1e0a030-deca-41ec-aa02-5285168b70aa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.010016052s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.75s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-993774 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-993774 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-993774 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
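Note: Localhost and HairPin differ only in the dial target: localhost:8080 inside the pod versus the pod's own `netcat` Service name, which succeeds only when hairpin NAT lets a pod reach itself through its service. A rough Go equivalent of the hairpin probe (service name and port taken from the command above; only meaningful when run inside the cluster):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "netcat" resolves via cluster DNS; the connection loops back to the
	// dialing pod itself when hairpin mode works.
	conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
	if err != nil {
		fmt.Println("hairpin probe failed:", err)
		return
	}
	conn.Close()
	fmt.Println("hairpin probe succeeded")
}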

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (70.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-993774 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-993774 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m10.656688346s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.66s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-f4p6n" [73344336-5576-420c-b15e-75b8b16675ba] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006016707s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
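Each CNI's ControllerPod subtest waits on a plugin-specific label selector and namespace before the connectivity checks run. The pairings exercised in this run, collected from the waiting lines in this section:

	package main

	import "fmt"

	// Selector/namespace pairs for the per-CNI controller pods, as
	// exercised by the ControllerPod subtests in this run.
	var controllers = map[string]struct{ selector, namespace string }{
		"kindnet": {"app=kindnet", "kube-system"},
		"calico":  {"k8s-app=calico-node", "kube-system"},
		"flannel": {"app=flannel", "kube-flannel"},
	}

	func main() {
		for plugin, c := range controllers {
			fmt.Printf("%s: wait on %q in namespace %q\n",
				plugin, c.selector, c.namespace)
		}
	}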

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-993774 "pgrep -a kubelet"
E0414 17:36:52.017523  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
I0414 17:36:52.036830  156633 config.go:182] Loaded profile config "kindnet-993774": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)
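KubeletFlags only needs the kubelet's full command line, which pgrep -a prints over minikube ssh. A sketch that fetches that line and lists the flags on it (the flag filtering is illustrative; the test's own assertions differ):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List the kubelet process with its full argv inside the VM;
		// the test inspects this line for expected flags.
		out, err := exec.Command("out/minikube-linux-amd64",
			"ssh", "-p", "kindnet-993774", "pgrep -a kubelet").Output()
		if err != nil {
			fmt.Println("ssh failed:", err)
			return
		}
		for _, field := range strings.Fields(string(out)) {
			if strings.HasPrefix(field, "--") {
				fmt.Println(field)
			}
		}
	}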

TestNetworkPlugins/group/kindnet/NetCatPod (13.3s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-993774 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-qh9zv" [93b194a3-633d-4508-bfdb-ee8166826051] Pending
helpers_test.go:344: "netcat-5d86dc444-qh9zv" [93b194a3-633d-4508-bfdb-ee8166826051] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-qh9zv" [93b194a3-633d-4508-bfdb-ee8166826051] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.003865386s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.30s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-993774 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-993774 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-993774 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/Start (81.83s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-993774 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-993774 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m21.830344851s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.83s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-s8xn6" [1c54b9c2-f486-4360-a38b-9f15c5749eff] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003796294s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-993774 "pgrep -a kubelet"
I0414 17:37:29.368718  156633 config.go:182] Loaded profile config "calico-993774": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

TestNetworkPlugins/group/calico/NetCatPod (10.27s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-993774 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-nkwms" [2af2e485-04bc-421e-a2bb-261bd03acd5c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-nkwms" [2af2e485-04bc-421e-a2bb-261bd03acd5c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004525353s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.27s)

TestNetworkPlugins/group/calico/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-993774 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-993774 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-993774 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-993774 "pgrep -a kubelet"
I0414 17:37:51.807439  156633 config.go:182] Loaded profile config "custom-flannel-993774": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.27s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-993774 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vwx2r" [56f2cd8b-461a-4c3a-8c29-d11162cd1189] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-vwx2r" [56f2cd8b-461a-4c3a-8c29-d11162cd1189] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004911631s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.27s)

TestNetworkPlugins/group/flannel/Start (71.14s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-993774 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-993774 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m11.143054677s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.14s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-993774 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-993774 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-993774 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (81.08s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-993774 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0414 17:38:31.090562  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-993774 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m21.084630032s)
--- PASS: TestNetworkPlugins/group/bridge/Start (81.08s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-993774 "pgrep -a kubelet"
I0414 17:38:44.906454  156633 config.go:182] Loaded profile config "enable-default-cni-993774": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.34s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-993774 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-5gnqk" [22092f24-fe1d-4057-97e5-dbb2d6fd64af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-5gnqk" [22092f24-fe1d-4057-97e5-dbb2d6fd64af] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.003239788s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.34s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-993774 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-993774 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-993774 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gzhkw" [f4d72ed6-7164-47f6-9521-8be54758eb2e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004542552s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-993774 "pgrep -a kubelet"
I0414 17:39:14.872444  156633 config.go:182] Loaded profile config "flannel-993774": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (11.49s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-993774 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context flannel-993774 replace --force -f testdata/netcat-deployment.yaml: (1.409286232s)
I0414 17:39:16.323544  156633 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0414 17:39:16.324882  156633 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ldwfw" [dfad5543-ce6b-471c-b8fc-b5d89f4ccf1b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-ldwfw" [dfad5543-ce6b-471c-b8fc-b5d89f4ccf1b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.006833939s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.49s)

TestNetworkPlugins/group/flannel/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-993774 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-993774 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-993774 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-993774 "pgrep -a kubelet"
I0414 17:39:42.555430  156633 config.go:182] Loaded profile config "bridge-993774": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

TestNetworkPlugins/group/bridge/NetCatPod (13.31s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-993774 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ltsm7" [55b1d6e3-be34-4db8-bf12-7c0cc3832364] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-ltsm7" [55b1d6e3-be34-4db8-bf12-7c0cc3832364] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.004608828s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.31s)

TestStartStop/group/no-preload/serial/FirstStart (106.4s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-721806 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-721806 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m46.40229471s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (106.40s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (102.76s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-061428 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-061428 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m42.756353594s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (102.76s)

TestNetworkPlugins/group/bridge/DNS (10.17s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-993774 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-993774 exec deployment/netcat -- nslookup kubernetes.default: (10.165574269s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (10.17s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-993774 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-993774 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
E0414 17:49:36.308200  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/newest-cni/serial/FirstStart (58.6s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-616953 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 17:41:13.898739  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:13.905147  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:13.916543  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:13.937955  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:13.979396  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:14.060808  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:14.222404  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:14.544236  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:15.186317  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:16.468557  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:19.030148  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-616953 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (58.596350789s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (58.60s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.99s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-616953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.99s)
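EnableAddonWhileActive passes --images and --registries overrides so the metrics-server addon points at registry.k8s.io/echoserver:1.4 on the unreachable fake.domain registry; presumably this exercises the override plumbing without ever pulling a real image. The same invocation from Go:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Enable the metrics-server addon while overriding both the image
		// and its registry, mirroring the flags used by the test.
		cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable",
			"metrics-server", "-p", "newest-cni-616953",
			"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
			"--registries=MetricsServer=fake.domain")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		cmd.Run()
	}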

TestStartStop/group/newest-cni/serial/Stop (11.34s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-616953 --alsologtostderr -v=3
E0414 17:41:24.152217  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-616953 --alsologtostderr -v=3: (11.337088501s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.34s)

TestStartStop/group/no-preload/serial/DeployApp (9.27s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-721806 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [554e1c60-cba8-44fe-8fc0-e2a4767a0251] Pending
helpers_test.go:344: "busybox" [554e1c60-cba8-44fe-8fc0-e2a4767a0251] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [554e1c60-cba8-44fe-8fc0-e2a4767a0251] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004229018s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-721806 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.27s)
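DeployApp creates a single busybox pod from testdata/busybox.yaml, waits for it to run, then reads the container's open-file limit with ulimit -n. A sketch of the sequence, substituting kubectl wait for the test's own polling helper:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		ctx := "no-preload-721806"
		// Create the busybox pod used by DeployApp...
		exec.Command("kubectl", "--context", ctx,
			"create", "-f", "testdata/busybox.yaml").Run()
		// ...block until it is Ready, then read the open-file limit.
		exec.Command("kubectl", "--context", ctx, "wait",
			"--for=condition=Ready", "pod/busybox", "--timeout=8m").Run()
		out, err := exec.Command("kubectl", "--context", ctx,
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
		if err != nil {
			fmt.Println("exec failed:", err)
			return
		}
		fmt.Printf("open-file limit in pod: %s", out)
	}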

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-616953 -n newest-cni-616953
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-616953 -n newest-cni-616953: exit status 7 (62.883836ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-616953 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
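The "may be ok" note reflects that minikube status reports cluster state through its exit code as well as stdout: a cleanly stopped profile prints Stopped and exits non-zero (7 in this run) rather than failing outright. A sketch that captures both signals:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Query the host field of minikube status; a stopped profile
		// yields "Stopped" on stdout and a non-zero exit code.
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "newest-cni-616953",
			"-n", "newest-cni-616953")
		out, err := cmd.Output()
		fmt.Printf("host state: %s", out)
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("exit code:", ee.ExitCode())
		}
	}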

TestStartStop/group/newest-cni/serial/SecondStart (36.78s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-616953 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 17:41:34.393892  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-616953 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (36.477946406s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-616953 -n newest-cni-616953
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.78s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-061428 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4b88649e-37df-4cf9-87d6-dc25301eae38] Pending
helpers_test.go:344: "busybox" [4b88649e-37df-4cf9-87d6-dc25301eae38] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4b88649e-37df-4cf9-87d6-dc25301eae38] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00408064s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-061428 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-721806 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-721806 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/no-preload/serial/Stop (91.06s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-721806 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-721806 --alsologtostderr -v=3: (1m31.062221861s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-061428 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0414 17:41:45.781986  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:45.788455  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:45.799929  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:45.821411  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:45.862729  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:45.944163  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:46.105770  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:46.427321  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-061428 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.39s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-061428 --alsologtostderr -v=3
E0414 17:41:47.068720  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:48.350355  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:50.912458  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:54.875342  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:41:56.034589  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:06.276362  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:08.946658  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/functional-207815/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-061428 --alsologtostderr -v=3: (1m31.387768128s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.39s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-616953 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
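VerifyKubernetesImages lists the images cached in the profile as JSON and reports anything outside the expected Kubernetes set (here kindest/kindnetd:v20241212-9f82dd49). A sketch that decodes the output loosely; the field names are an assumption about the schema of minikube image list --format=json:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "newest-cni-616953", "image", "list",
			"--format=json").Output()
		if err != nil {
			fmt.Println("image list failed:", err)
			return
		}
		// Decode into generic maps to avoid depending on the exact schema.
		var images []map[string]interface{}
		if err := json.Unmarshal(out, &images); err != nil {
			fmt.Println("unexpected output format:", err)
			return
		}
		for _, img := range images {
			fmt.Println(img["repoTags"]) // assumed field name
		}
	}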

TestStartStop/group/newest-cni/serial/Pause (3.06s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-616953 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-616953 -n newest-cni-616953
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-616953 -n newest-cni-616953: exit status 2 (305.328455ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-616953 -n newest-cni-616953
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-616953 -n newest-cni-616953: exit status 2 (294.418086ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-616953 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-616953 -n newest-cni-616953
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-616953 -n newest-cni-616953
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.06s)
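While a profile is paused, the {{.APIServer}} status field reads Paused and {{.Kubelet}} reads Stopped, each with exit status 2 (again "may be ok"); unpause restores both. A sketch of the same pause/inspect/unpause sequence:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// status runs minikube status with a Go template and returns the printed
	// state, ignoring the non-zero exit code a paused profile yields.
	func status(profile, field string) string {
		out, _ := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
		return string(out)
	}

	func main() {
		profile := "newest-cni-616953"
		exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).Run()
		fmt.Print("apiserver: ", status(profile, "APIServer")) // Paused
		fmt.Print("kubelet:   ", status(profile, "Kubelet"))   // Stopped
		exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile).Run()
	}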

TestStartStop/group/embed-certs/serial/FirstStart (80.53s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-418468 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 17:42:23.150409  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:23.156795  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:23.168130  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:23.189432  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:23.230788  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:23.312236  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:23.473735  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:23.795596  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:24.436957  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:25.718371  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:26.757897  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:28.279704  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:33.401756  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:35.836620  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/auto-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:43.643969  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:52.062715  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:52.069091  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:52.080396  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:52.101764  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:52.143157  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:52.224675  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:52.386106  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:52.707836  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:53.349543  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:54.631867  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:42:57.193390  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:43:02.315710  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:43:04.125890  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:43:07.720138  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/kindnet-993774/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-418468 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m20.534737775s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (80.53s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-721806 -n no-preload-721806
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-721806 -n no-preload-721806: exit status 7 (63.997766ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-721806 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (388.4s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-721806 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 17:43:12.557879  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-721806 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (6m27.905922625s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-721806 -n no-preload-721806
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (388.40s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-061428 -n default-k8s-diff-port-061428
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-061428 -n default-k8s-diff-port-061428: exit status 7 (64.915169ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-061428 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (346.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-061428 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 17:43:31.089979  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:43:33.039239  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/custom-flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-061428 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m46.435577494s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-061428 -n default-k8s-diff-port-061428
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (346.82s)

TestStartStop/group/embed-certs/serial/DeployApp (10.3s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-418468 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f4eac806-6ef5-4827-bb38-0608d3473531] Pending
helpers_test.go:344: "busybox" [f4eac806-6ef5-4827-bb38-0608d3473531] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f4eac806-6ef5-4827-bb38-0608d3473531] Running
E0414 17:43:45.088035  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/calico-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:43:45.219588  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:43:45.225900  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:43:45.237202  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:43:45.258529  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:43:45.299856  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:43:45.381280  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:43:45.542782  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:43:45.864657  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.0047132s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-418468 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.30s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-418468 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0414 17:43:46.506475  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-418468 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.083198766s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-418468 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/embed-certs/serial/Stop (91.18s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-418468 --alsologtostderr -v=3
E0414 17:43:47.788181  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
E0414 17:43:50.349975  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-418468 --alsologtostderr -v=3: (1m31.180123324s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.18s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-418468 -n embed-certs-418468
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-418468 -n embed-certs-418468: exit status 7 (75.385805ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-418468 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (336.01s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-418468 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-418468 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m35.630589982s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-418468 -n embed-certs-418468
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (336.01s)

TestStartStop/group/old-k8s-version/serial/Stop (3.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-768580 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-768580 --alsologtostderr -v=3: (3.300722283s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.30s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-768580 -n old-k8s-version-768580
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-768580 -n old-k8s-version-768580: exit status 7 (64.150452ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-768580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5642d" [e58c9061-02fb-4456-82f5-690790c73ebc] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0414 17:49:08.605962  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/flannel-993774/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5642d" [e58c9061-02fb-4456-82f5-690790c73ebc] Running
E0414 17:49:12.919889  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/enable-default-cni-993774/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.005289292s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5642d" [e58c9061-02fb-4456-82f5-690790c73ebc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004780564s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-061428 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-061428 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-061428 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-061428 -n default-k8s-diff-port-061428
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-061428 -n default-k8s-diff-port-061428: exit status 2 (248.081245ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-061428 -n default-k8s-diff-port-061428
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-061428 -n default-k8s-diff-port-061428: exit status 2 (243.891712ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-061428 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-061428 -n default-k8s-diff-port-061428
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-061428 -n default-k8s-diff-port-061428
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.89s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-c9frt" [5cf73c44-161c-4f20-b278-ebb031b20d63] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-c9frt" [5cf73c44-161c-4f20-b278-ebb031b20d63] Running
E0414 17:49:42.840852  156633 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/bridge-993774/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004427737s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-c9frt" [5cf73c44-161c-4f20-b278-ebb031b20d63] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004534656s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-721806 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-721806 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.62s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-721806 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-721806 -n no-preload-721806
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-721806 -n no-preload-721806: exit status 2 (229.791147ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-721806 -n no-preload-721806
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-721806 -n no-preload-721806: exit status 2 (242.197407ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-721806 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-721806 -n no-preload-721806
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-721806 -n no-preload-721806
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.62s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-hjlft" [0f57b85a-2951-4815-80dc-a5e812ad3b8b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-hjlft" [0f57b85a-2951-4815-80dc-a5e812ad3b8b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.00344996s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-hjlft" [0f57b85a-2951-4815-80dc-a5e812ad3b8b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003549167s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-418468 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-418468 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (2.63s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-418468 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-418468 -n embed-certs-418468
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-418468 -n embed-certs-418468: exit status 2 (238.807164ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-418468 -n embed-certs-418468
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-418468 -n embed-certs-418468: exit status 2 (241.210854ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-418468 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-418468 -n embed-certs-418468
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-418468 -n embed-certs-418468
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.63s)

Test skip (40/321)

Order Skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.2/cached-images 0
15 TestDownloadOnly/v1.32.2/binaries 0
16 TestDownloadOnly/v1.32.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
260 TestNetworkPlugins/group/kubenet 3.57
273 TestNetworkPlugins/group/cilium 3.24
280 TestStartStop/group/disable-driver-mounts 0.14
TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

TestDownloadOnly/v1.32.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

TestDownloadOnly/v1.32.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.31s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-411768 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.57s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-993774 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-993774

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-993774

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-993774

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-993774

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-993774

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-993774

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-993774

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-993774

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-993774

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-993774

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

>>> host: /etc/hosts:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

>>> host: /etc/resolv.conf:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-993774

>>> host: crictl pods:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

>>> host: crictl containers:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

>>> k8s: describe netcat deployment:
error: context "kubenet-993774" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-993774" does not exist

>>> k8s: netcat logs:
error: context "kubenet-993774" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-993774" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-993774" does not exist

>>> k8s: coredns logs:
error: context "kubenet-993774" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-993774" does not exist

>>> k8s: api server logs:
error: context "kubenet-993774" does not exist

>>> host: /etc/cni:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

>>> host: ip a s:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

>>> host: ip r s:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

>>> host: iptables-save:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-993774" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 17:31:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.83:8443
  name: cert-expiration-560919
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 17:31:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.228:8443
  name: running-upgrade-912198
contexts:
- context:
    cluster: cert-expiration-560919
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 17:31:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-560919
  name: cert-expiration-560919
- context:
    cluster: running-upgrade-912198
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 17:31:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: running-upgrade-912198
  name: running-upgrade-912198
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-560919
  user:
    client-certificate: /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/client.crt
    client-key: /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/client.key
- name: running-upgrade-912198
  user:
    client-certificate: /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/running-upgrade-912198/client.crt
    client-key: /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/running-upgrade-912198/client.key
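
Editor's note: the config dump above is the root cause of this section in miniature. current-context is empty and the only remaining contexts are cert-expiration-560919 and running-upgrade-912198, so every probe pinned to kubenet-993774 fails. A minimal Go sketch (illustrative only, not part of the suite) that reproduces the failure mode against whatever kubeconfig is active:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask kubectl which context names it actually knows about.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Println("known contexts:", strings.Fields(string(out)))

	// kubenet-993774 is not among them, so this fails exactly like the probes above.
	msg, _ := exec.Command("kubectl", "--context", "kubenet-993774", "get", "pods").CombinedOutput()
	fmt.Printf("%s", msg) // error: context "kubenet-993774" does not exist
}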

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-993774

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-993774"

                                                
                                                
----------------------- debugLogs end: kubenet-993774 [took: 3.421006143s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-993774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-993774
--- SKIP: TestNetworkPlugins/group/kubenet (3.57s)
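
Editor's note: the skip fires before any cluster is started, yet the post-mortem collector (panic.go:631) still runs its full battery of probes against the never-created profile, which is why every entry above is an error rather than useful debug output. A hedged sketch, under the assumption that one wanted to guard the collector (profileExists is a hypothetical helper, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// profileExists shells out to the same binary the harness uses and checks
// whether the named profile appears in the profile list.
func profileExists(name string) bool {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list").CombinedOutput()
	return err == nil && strings.Contains(string(out), name)
}

func main() {
	if !profileExists("kubenet-993774") {
		fmt.Println("profile never created (test skipped); skipping debug-log probes")
		return
	}
	// ... run the >>> probes collected above ...
}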

                                                
                                    
TestNetworkPlugins/group/cilium (3.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-993774 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-993774

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-993774

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-993774

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-993774

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-993774

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-993774

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-993774

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-993774

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-993774

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-993774

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-993774

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-993774" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-993774

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-993774

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-993774

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-993774

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-993774" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-993774" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 17:31:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.83:8443
  name: cert-expiration-560919
contexts:
- context:
    cluster: cert-expiration-560919
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 17:31:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-560919
  name: cert-expiration-560919
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-560919
  user:
    client-certificate: /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/client.crt
    client-key: /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/cert-expiration-560919/client.key
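
Editor's note: compared with the kubenet dump earlier, running-upgrade-912198 has disappeared (that profile finished and was deleted in the interim), while cilium-993774 never appears at all. A sketch using client-go to inspect the same file programmatically; the k8s.io/client-go dependency and the KUBECONFIG environment variable are assumptions for illustration, not something this suite is known to do here:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the kubeconfig path is exported via KUBECONFIG.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Println("load failed:", err)
		return
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext) // "" in the dump above
	_, ok := cfg.Contexts["cilium-993774"]
	fmt.Println("cilium-993774 present:", ok) // false, hence every error above
}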

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-993774

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-993774" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-993774"

                                                
                                                
----------------------- debugLogs end: cilium-993774 [took: 3.103307105s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-993774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-993774
--- SKIP: TestNetworkPlugins/group/cilium (3.24s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-106293" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-106293
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
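
Editor's note: unlike the network-plugin skips above, this one is a plain driver gate (start_stop_delete_test.go:101): the group only runs on the virtualbox driver, and this job uses KVM, so the pre-created profile is simply deleted. A hedged Go sketch of the shape of such a gate (driverName and the test body are illustrative, not the suite's actual code):

package sketch

import "testing"

// driverName is illustrative; this report's job ran with the KVM driver.
const driverName = "kvm2"

// TestDisableDriverMounts bails out early unless virtualbox is in use,
// mirroring the skip message logged above.
func TestDisableDriverMounts(t *testing.T) {
	if driverName != "virtualbox" {
		t.Skip("skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox")
	}
	// ... start the cluster with --disable-driver-mounts and assert host mounts are absent ...
}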

                                                
                                    